Why Won’t They Just Click? Understanding Older Adults’ Hesitation Toward AI
In-Depth Research · Technology & Society
Understanding why older adults hesitate to embrace artificial intelligence — and why their hesitation may be the wisest response in the room.
Picture this: a 71-year-old retired school principal sits at her kitchen table. She has managed classrooms of thirty children, navigated three decades of bureaucratic reform, raised a family, and learned to use email at a time when it felt like witchcraft. Her daughter is trying to show her ChatGPT. "Just type what you want to know," her daughter says, as though the difficulty were simply a matter of knowing where the keys are. The principal listens politely, nods, and later quietly closes the laptop. She is not afraid of the computer. She is afraid of something harder to name.
Her hesitation is shared by tens of millions. And yet every time it appears in public conversation, it gets the same treatment: a shrug, a label, a knowing smile. Technophobia. A digital divide. A generational lag. These phrases travel efficiently precisely because they explain nothing — and explaining nothing, they require no response beyond impatience.
The data behind the gap is striking on its own terms. According to joint research from the University of Michigan and AARP, only about six percent of adults over 65 have ever used a tool like ChatGPT, compared to roughly forty-three percent of those under 30. That is not a minor statistical divergence. It is a chasm — a generational fault line running through every dinner table conversation where a well-meaning adult child tries to hand a parent a phone and says, with cheerful impatience, "try the AI."
But the numbers, arresting as they are, invite the wrong question. The question is not: how do we get older adults to click? The question is: what are they responding to, and are they right?
What looks like reluctance from the outside is, from the inside, a perfectly rational, multi-layered evaluation conducted by people who have lived long enough to know what happens after the revolution arrives.
To understand the hesitation honestly requires resisting the temptation of a single explanation. The barriers are psychological and biological, cultural and economic, moral and structural. They are not separate threads. They are a braid. And to understand them — individually, and together — is to arrive at a conclusion that the technology industry has been slow to accept: the hesitation of older adults before AI is not a problem to be solved. It is a signal worth listening to.
The Mind That Pauses Before It Clicks: Psychological Barriers
Start where hesitation actually begins: inside the mind. Psychologists who study technology adoption have identified a condition they call technology anxiety — not a vague nervousness around screens, but a targeted, specific dread of making an irreversible error in an unfamiliar system. The fear is not of the device itself. It is of consequences: of clicking something that cannot be unclicked, of sending a message to the wrong person, of triggering a process whose outcome cannot be predicted or undone.
This fear is not irrational. It is learned. Younger users who grew up surrounded by digital technology developed, often unconsciously, a tolerance for trial and error — a faith, absorbed through hundreds of small recoveries, that most mistakes are fixable. They click and swipe with the casual confidence of children learning to fall on grass. Older adults often did not have access to that developmental window. Many first encountered computers in their fifties or sixties, when the stakes of mistakes felt higher and the tolerance for confusion was lower. When these same adults approach an AI interface whose logic is opaque and whose outputs are unpredictable, the internal voice that asks what if I break something? is not anxiety misfiring. It is experience speaking.
Layered on top of this is something researchers in gerontechnology have identified as even more insidious: internalized ageism. This is the process by which a culture's prejudices about who technology belongs to — overwhelmingly, implicitly, persistently: the young — are absorbed into a person's own self-concept. When every advertisement for an AI tool features a twenty-something professional, when every onboarding tutorial assumes the reader already knows what a "prompt" is, the unspoken message arrives clearly: this is not for you.
Research synthesized across gerontechnology literature, including foundational work by Khosravi, Rezvani, and Wiewiora (2016), demonstrates that this unspoken message erodes self-efficacy — the belief that one is capable of mastering something new. When an older adult says "I'm just not a tech person," it is rarely a statement of objective fact. It is the repetition of what a culture has said long enough that it starts to feel true. The cycle is self-sealing: feeling excluded leads to non-engagement, which confirms the sense of exclusion, which deepens the reluctance.
The voice that says "I'm not a tech person" is rarely original. It was borrowed from a culture that forgot to invite you to the table.
But here is what tends to get lost when these psychological barriers are discussed: much of the hesitation has nothing to do with self-doubt or anxiety at all. It is the product of a clear-eyed, accurate assessment of what artificial intelligence actually is and does. Generative AI systems produce fluent, confident, and sometimes entirely fabricated responses — a failure mode researchers call "hallucination." For a younger user experimenting out of curiosity, an AI hallucination is an amusing quirk. For a seventy-year-old asking about a medication interaction or the legal implications of a financial instrument, there is no margin for confident misinformation. The older adult who looks at this technology and thinks I cannot trust it with something that matters is not failing to understand AI. They are understanding it with uncomfortable precision.
Fred Davis's Technology Acceptance Model (TAM), one of the most-cited frameworks in human-computer interaction research, holds that perceived usefulness and perceived ease of use together drive technology adoption (Davis, 1989) — and research applying the model to older adults consistently finds perceived ease of use to be the decisive factor: more powerful than perceived benefit, more powerful than social pressure. By that measure, most AI tools have a structural problem. They are designed by young, tech-fluent engineers for young, tech-fluent users. They reward the curious experimenter and frustrate the careful, methodical learner. The assumption of prior knowledge is pervasive. The AI industry's usability problem is not a user problem. It is a design problem that the industry has chosen to assign to users.
Real Bodies, Real Barriers: Neuroscience and the Physiology of Aging
There is a category of barriers that deserves to be stated plainly, without euphemism or apology: aging bodies and aging brains encounter AI differently, and not because of any moral or intellectual failing.
Working memory — the mental workspace where we hold and manipulate new information in real time — changes measurably with age. Processing speed — the pace at which the brain handles novel stimuli and multistep instructions — slows in ways documented across decades of cognitive science. Learning an entirely new interaction paradigm, one as genuinely strange as having a typed or spoken conversation with a machine that answers unpredictably, places real cognitive demands on any learner. For older adults, those demands arrive without scaffolding. Nothing in the lived experience of someone who grew up in the mid-twentieth century prepared them to "prompt" a machine (Vaportzis, Clausen, & Gow, 2017; Mitzner et al., 2010).
Sensory and motor realities compound this picture. Vision dims with age: small fonts, low-contrast interfaces, and cluttered screen layouts are not minor aesthetic inconveniences — they are functional barriers. Hearing narrows: voice-based AI systems, frequently marketed as the great democratizing equalizer of AI interaction, have a well-documented and underaddressed flaw — they systematically perform worse on older voices, regional accents, and the vocal qualities shaped by a lifetime. When a person carefully articulates a request to an AI assistant and is misunderstood three times in a row, trust does not merely decline. It evaporates.
The cruel irony: the same voice recognition AI that works perfectly for a twenty-five-year-old often fails for the seventy-year-old who needs it most.
And yet here is where the picture complicates itself in a way that demands attention: older adults are already living with artificial intelligence every single day. They just don't call it that. The predictive text on their smartphones. The GPS rerouting around traffic in real time. The fraud-detection algorithm at their bank that silently flags an unusual transaction before any human reviews it. The smart hearing aid that filters ambient noise using machine learning. These are all AI systems, and older adults use them without complaint — because they work, they are invisible, and they solve a concrete problem without demanding that the user understand or perform anything new.
The resistance, researchers consistently find, is not to AI per se. It is to AI that announces itself — that demands engagement, requires a new vocabulary, and asks users to adopt an entirely new mode of interaction with no prior roadmap. Call it "artificial intelligence" and you trigger wariness. Embed the same technology in a familiar, functional tool, and it passes without comment. Older adults are not technophobic. They are, in a precise and meaningful sense, label-phobic — and that phobia is earned, because the label has almost always arrived with fine print.
Culture, Identity, and the Weight of a Lifetime: Social Dimensions
To understand why older adults hesitate at the threshold of AI requires stepping outside the individual and looking at the broader cultural landscape in which decisions are made.
Researchers at AARP and elsewhere have documented what is sometimes called the experimentation gap: a fundamental generational difference in how people approach unfamiliar technology. Younger users often approach AI as a playground — they type absurd questions, laugh at unexpected answers, and learn through cheerful trial and error because the process itself is pleasurable. For many older adults, the orientation is different: pragmatic, transactional, centered on a simple and honest question — What, specifically, will this do for me that I cannot already do? When the answer is "it can make you more productive" or "it's the future," the response is equally reasonable: I am managing fine as it is. This is not stubbornness. It is efficient resource allocation by someone who knows that time, energy, and cognitive bandwidth are not infinite.
But the cultural reluctance runs deeper than pragmatism, and into territory that is genuinely existential. Many older adults have spent forty or fifty years building something — a professional identity, a body of expertise, a set of human skills honed through a lifetime of practice and failure and refinement. And AI arrives, in the popular imagination and often in its own marketing, as a system that can replicate or surpass those hard-won capacities in seconds.
For someone who has spent a lifetime becoming an expert, adopting AI can feel less like getting help and more like agreeing to become obsolete.
Beneath this identity concern lies something even deeper: a fear of losing the human in human relationships, particularly in the domains that matter most — healthcare, elder care, companionship, bereavement. Older adults have lived long enough to sit at bedsides, to receive comfort from a nurse's actual presence, to understand intuitively that a hand on a shoulder and an algorithm's output are not the same kind of thing, regardless of how accurate the algorithm becomes. When they hear about AI in healthcare, they do not simply hear efficiency. They hear the possible substitution of empathy with optimization. This is not Luddite panic. It is a moral claim about the nature of care — and it is one that technology enthusiasts have been too quick to dismiss.
Social dynamics reinforce these individual orientations in both directions. Peer skepticism creates and sustains norms of non-adoption: if everyone in your social circle views AI as a threat or a corporate trick, you will not be the first to defect from that consensus. But the reverse is equally true and, in some ways, more powerful. Research suggests that a single trusted peer — someone of your generation, your background, your experience — demonstrating a concrete, personal benefit from AI is worth more than a thousand corporate advertisements.
Voices from the Moral Center: Ethical and Spiritual Dimensions
The hesitation of older adults before artificial intelligence is not only a matter of psychology, practicality, or cultural identity. For millions of people, it touches something the secular vocabulary of technology adoption has difficulty naming: a sense of the sacred in human connection, and a moral instinct about what is being lost.
In June 2024, Pope Francis addressed the G7 Summit on Artificial Intelligence in Fasano, Italy — the first pontiff ever to address that body. His warning was direct: consequential decision-making "must always be left to the human person." He spoke of the risk of producing what he called a "technologized" life, one in which intelligent machines are mistakenly attributed capacities that belong properly and irreducibly to human beings — compassion, moral judgment, the capacity to suffer alongside another person. For older adults who have lived long enough to know the difference between a machine and a soul, this language was not abstract theology. It was a precise articulation of something they had already felt but had not found the words for.
In his message for the Paris AI Summit in February 2025, Pope Francis returned to these themes, urging that AI governance be centered on privacy, transparency, and reliability — precisely the values that older adults have been articulating, sometimes without the vocabulary but always with the instinct, since AI first entered their lives. And in his 2024 message for the 58th World Day of Social Communications, he reflected on the need for "artificial intelligence and the wisdom of the heart," framing the challenge not as technology versus humanity but as the question of how intelligence can be made genuinely human rather than merely efficient.
The wisest response to artificial intelligence may not be enthusiasm or refusal, but the question that has always accompanied genuine human progress: Is this making us more human, or less?
Decades earlier, Pope John Paul II, addressing the Pontifical Council for Health Pastoral Care in October 1998, cautioned against what he called the "myth of productivity" — a cultural tendency to measure human worth through output and efficiency, which diminishes the dignity of those who are no longer in the full productivity of their working years. The quiet pressure on older adults today to adopt AI — to "keep up," to "stay relevant," to prove their continued usefulness by learning new tools — carries a troubling undertone when examined honestly. It asks people to demonstrate their worth through efficiency at the very stage of life when the wisdom of slowing down deserves recognition, not remediation.
These moral dimensions are not peripheral to the discussion of AI adoption. For many older adults, they are the center of it.
Trust, Privacy, and the Fraud That Is Already Here
If you want to understand why trust is so fragile in this population, it helps to understand the environment in which that trust must be extended.
According to the University of Michigan's National Poll on Healthy Aging, seventy-four percent of adults over 50 report little or no trust in AI-generated health information. The reflexive response is to ask: how do we change their minds? The better question is: are they wrong? Given what we know about AI hallucination rates in clinical contexts, about the confidence with which these systems present invented medical information, about the documented failures of AI diagnostic tools when deployed outside controlled conditions — calibrated skepticism is not irrationality. It is an appropriate response to the actual, documented reliability of the technology.
Meanwhile, according to AARP's Fraud Watch Network, three-quarters of older Americans report having been targeted by cybercriminals. And the threat has grown more intimate and more frightening. AI-powered deepfake voice scams — in which a caller sounds exactly like a grandchild, speaking in panic, asking for money urgently — are not hypothetical scenarios. They are documented, they are proliferating, and they have extracted real money from real families. When an older adult hears "artificial intelligence" and feels something tighten in their chest, that response is not paranoia. It is pattern recognition. They have already met AI's dark side, wearing a familiar face and speaking in a borrowed voice.
Fear of AI is not ignorance. It is often the result of paying very close attention.
The "black box" problem compounds this distrust in a way AI proponents have been slow to reckon with. AI systems make consequential decisions — about what information to surface, what facts to present as true, what options to recommend — through processes that are opaque not just to users but often to the engineers who built them. For someone who has spent a lifetime in fields where conclusions must be traceable — in medicine, law, financial management, education — this opacity is a fundamental breach of epistemic trust.
There is a paradox here that researchers have found consistently: greater knowledge of AI does not necessarily produce greater trust. It often produces less. Users who understand how these systems actually work — who know about training data biases, about failure modes, about the ways AI can be weaponized for manipulation — have more reason to be cautious, not less. Informed skepticism, in this domain, scales with understanding. The older adult who has read enough to know what an AI hallucination is, what a deepfake voice can do, and what a terms-of-service agreement typically harvests — and who responds to all of that by pausing before clicking "accept" — is not behind the curve. They may be ahead of it.
The Costs Nobody Puts in the Brochure: Economic and Structural Barriers
Not all barriers to AI adoption are psychological or cultural. Some are the straightforward result of material conditions that no amount of curiosity or goodwill can overcome.
Many older adults live on fixed incomes. When the benefit of a new subscription service is genuinely uncertain — when the most honest argument its advocates can offer is that "it might be useful someday" — the rational decision is to wait. This is not financial timidity. It is financial literacy. When monthly income is defined and limited, the distinction between things that solve a problem you actually have and things that solve a problem someone else has decided you should want becomes very clear, very quickly.
Rural and lower-income older adults face an even more fundamental obstacle: broadband access. In large parts of the country, high-speed internet adequate for AI tools is simply unavailable, or available only at prices a fixed income cannot absorb. This is not a personal failing. It is a structural one — an infrastructure gap that policy has been slow to close, and that no amount of digital literacy training can bridge. Describing this as a "reluctance" problem misidentifies the cause entirely.
The gender dimension deserves explicit acknowledgment. In a statement to the 67th Commission on the Status of Women in March 2023, the Holy See noted that older women disproportionately lag behind in digital access and AI literacy — not as a reflection of interest or capacity, but as the accumulated result of structural inequalities: differential access to technical education across a lifetime, workforce experiences less likely to involve computers, and domestic roles that historically placed women at the periphery of technological adoption. This is not reluctance. This is systemic exclusion, and the distinction matters enormously for how we design our responses to it.
And even among those who are comfortably retired and personally at ease with their own relationship to AI, there is a genuine, outward-looking concern that deserves more than dismissal: What is this technology doing to the world my children and grandchildren will inherit? The displacement of human labor by AI is not an abstraction for people who spent their working lives watching industries transform. The question of what AI means for economic dignity, for the nature of work, for the social contract — these are moral questions, and older adults, holding the longest view, are in some ways best positioned to ask them seriously.
When the Tool Doesn't Fit the Hand: The Design Failure Nobody Will Name
The most honest explanation for why AI adoption lags among older adults is also the simplest — and it implicates the industry rather than the users: most AI tools were not designed with older adults in mind.
They were designed by young engineers, at companies whose internal culture skews young, tested on samples that skew younger still, and launched into a market that the industry's imagination — visible in its advertisements, its case studies, its conference speakers — implicitly treats as young by default. Co-design, the practice of involving intended users in the development process from the beginning, is rare in the technology industry. It is vanishingly rare when the intended users are over 65.
The results of this omission are predictable and well-documented. AI tools frequently address problems that older adults do not have, while ignoring the ones they do. The vocabulary they require — "prompt," "context window," "hallucination," "model," "token" — is a new literacy with no analog in any prior technological shift. Without patient, in-person, trusted guidance — not a help article, not a YouTube tutorial, not a customer service chatbot — the experience of learning AI is often demoralizing.
Older adults have seen enough hype cycles to know the difference between a revolution and a product launch.
Everett Rogers, whose work on the diffusion of innovations remains foundational to understanding technology adoption, was careful to distinguish late adoption from outright rejection (Rogers, 2003). Late adopters are not resistant. They are discerning. They require more evidence before committing to an innovation, because they are appropriately aware of the costs of a wrong bet. Having lived through the hype cycles of the personal computer, the internet, social media, and the smartphone — having watched some promises delivered magnificently and others quietly abandoned — older adults have developed, naturally and sensibly, a discerning eye. Their skepticism is not born in the present. It is earned in the past.
The "innovator's dilemma," in this context, runs the other way. The industry that cannot make its technology legible and trustworthy to the most experienced, most patient, most stakes-aware cohort of users is the one with the design problem.
What the Data Actually Shows: A Portrait of Conditional Engagement
Here is where the picture most often painted of older adults and AI — the picture of wholesale resistance, of a generation left behind — runs headlong into the data.
Fifty-five percent of adults aged 50 and over have already used some form of AI technology, according to University of Michigan and AARP surveys. Among those who use voice assistant technology, eighty percent report genuine, tangible benefits for independent living — medication reminders, hands-free communication with family, information retrieval, real-time navigation assistance. These are not the numbers of a generation that has rejected the future. They are the numbers of a generation that is evaluating it — carefully, conditionally, and with their eyes open.
At the same time, fifty-three percent believe that AI may ultimately cause more harm than good if left unchecked. Read together, these numbers do not describe rejection. They describe what responsible technology adoption actually looks like: genuine engagement paired with an insistence on transparency, demonstrated value, and accountability. Informed, conditional acceptance. Not a refusal — a negotiation over terms.
Conditional acceptance is not half-hearted adoption. It is the only adoption that deserves to last.
And if we are honest about the landscape in which that negotiation is happening — where AI companies routinely overpromise and underexplain, where terms-of-service agreements run to tens of thousands of words that no one reads, where business models of data harvesting are obscured behind friendly interfaces, where the gap between what AI marketing claims and what AI systems reliably deliver remains substantial — then the older generation's demand for evidence and transparency is not a lag. It is a correction. The eighty-one percent who want more information before trusting AI are not behind the curve. In a world rushing forward with the enthusiasm of the early adopter, they may be ahead of it.
The Challenge Is Ours, Not Theirs
Let us be direct about where the burden of change actually sits.
The hesitation of older adults before artificial intelligence is multi-causal, multi-dimensional, and often wise. It is the product of lived experience with technologies that promised everything and delivered partially. It is the product of legitimate concern about privacy, fraud, and the erosion of human connection. It is the product of bodies and minds navigating real biological changes in interfaces designed without them in mind. It is the product of deep moral instincts — sometimes inarticulate, always genuine — about what it means to be human in a world that is outsourcing human capacities to machines faster than it has thought through the implications. And it is the product of economic and structural realities that no individual can overcome through motivation alone.
To the technology industry: the ask is not complicated, though the execution requires genuine commitment. Design for inclusion from the beginning — not as an accessibility checklist appended to a finished product, but as a design principle present in the earliest conversations. Involve older adults in co-design processes, not as test subjects but as participants and authorities on their own needs. Be radically transparent about what your systems can and cannot do, and about what happens to user data. Build trust the hard way — by earning it through reliability and honesty, not by arguing for it through marketing. Accessible design is not a niche accommodation for a small, difficult population. It is the mark of a product that has genuinely thought about its users.
To families and communities: the invitation is patience, and genuine demonstration rather than persuasion. If you want someone you love to try an AI tool, show them something specific and real. Help them with something that actually matters to them. Let them see it work, and let them see it fail, and be honest about both. Don't argue for the future — live in it alongside them, slowly, with the understanding that trust is built in small moments and cannot be rushed. One trusted person demonstrating a genuine benefit is worth infinitely more than a thousand advertisements.
The older generation did not grow up in a world that rewarded moving fast and breaking things. They grew up in a world that rewarded building things carefully, and making sure they lasted. That is not a deficiency. It is a disposition that an industry drunk on disruption might find, if it listened, to be exactly what it needs.
And to you, the reader — especially if you are somewhere in that 55-to-75 range, navigating your own ambivalent relationship with this technology — perhaps the most useful observation is this: your reluctance has been called many things. Fear. Stubbornness. Technophobia. Being left behind. But what if it is something else? What if it is discernment — the product of having seen enough technological revolutions to know that the first wave of enthusiasm always arrives with fine print? You have lived through the rise of personal computers. You watched the internet promise to connect everyone, then watched it connect everyone to scams, misinformation, and surveillance. You saw social media bring families together and then quietly pull the social fabric apart. You have earned the right to say: I will adopt what proves useful, and I will reject what harms me or the people I love.
That is not the behavior of someone who cannot learn. It is the behavior of someone who has learned a great deal.
The hesitation before the click is not a failure of nerve. It is a refusal to be rushed past something that deserves a second look. And in a world moving very fast, in directions it has not fully mapped, that refusal may turn out to be one of the most important things a human being can offer.
The question is not whether older adults will keep up with technology. The question is whether technology will learn to keep pace with humanity.
References
- AARP Fraud Watch Network. (2024). Fraud and older Americans: Annual report. AARP Public Policy Institute.
- AARP Research & University of Michigan National Poll on Healthy Aging. (2023–2024). Artificial intelligence and older adults: Usage, trust, and attitudes.
- Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
- Holy See. (2023, March 9–13). Statement during the 67th Commission on the Status of Women. United Nations, New York.
- Khosravi, P., Rezvani, A., & Wiewiora, A. (2016). The impact of technology on older adults' social isolation. Computers in Human Behavior, 63, 594–603.
- Mitzner, T. L., Boron, J. B., Fausset, C. B., Adams, A. E., Charness, N., Czaja, S. J., Dijkstra, K., Fisk, A. D., Rogers, W. A., & Sharit, J. (2010). Older adults talk technology: Technology usage and attitudes. Computers in Human Behavior, 26(6), 1710–1721.
- Pew Research Center. (2024). Older adults and technology use. Washington, DC.
- Pope Francis. (2024, June). Address at the G7 Session on Artificial Intelligence. Fasano, Italy. Vatican City.
- Pope Francis. (2024). 58th World Day of Social Communications: Artificial intelligence and the wisdom of the heart — Towards a fully human communication. Vatican City.
- Pope Francis. (2025, February 7). Message for the Paris AI Summit. Vatican City.
- Pope John Paul II. (1998, October 31). Address to participants in the Conference of the Pontifical Council for Health Pastoral Care. Vatican City.
- Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
- Vaportzis, E., Clausen, M. G., & Gow, A. J. (2017). Older adults' perceptions of technology and barriers to interacting with tablet computers: A focus group study. Frontiers in Psychology, 8, 1687. https://doi.org/10.3389/fpsyg.2017.01687