The trust gap: why you're more worried than the experts
The most useful number in the 2026 AI Index has nothing to do with model size or compute. It's the gap between AI experts and the public. Asked whether AI will improve how people do their jobs, 73% of experts said yes; only 23% of the public agreed. The same shape holds for the economy (69% vs 21%) and medical care (84% vs 44%). And according to the Index's public-opinion chapter, 64% of Americans expect AI to mean fewer jobs over the next twenty years.
This mismatch matters. The people building these systems trust them more than the people using them do. If you've felt out of step with the tech-optimist mood in the news, the Index says you're in the majority. Either the experts are picking up something the public can't see yet, or the public is picking up something the experts have stopped noticing. Both are possible.
What is actually happening to jobs
The 2026 jobs picture is not the mass-unemployment story that dominates social media. Damage is concentrated at the start of careers, not across the whole workforce.
The clearest signal in the Index: US software developers aged 22 to 25 are down nearly 20% in employment since 2024, while older developers continue to grow in number. Customer service shows the same shape. Entry-level roles, where AI can handle most routine work in seconds, are getting hit first. Mid-career professionals are largely fine. People trying to break in are finding fewer doors open. The Stanford report calls this the early-career labour squeeze.
Demand for AI skills is rising at the same time. AI-related skills now appear in 2.5% of US job postings, up 55% year-on-year and almost 300% over the decade. Postings asking for "agentic AI" experience went from negligible in 2024 to roughly 90,000 in 2025. Globally, 58% of workers say they use AI at work semi-regularly or more. In India, China, Nigeria, the UAE, Egypt and Saudi Arabia the figure is over 80%.
| Worry | What the 2026 Index actually shows | What it means for you |
|---|---|---|
| "AI will take my job" | Aggregate unemployment is not collapsing. Losses are concentrated in early-career roles and high-exposure tasks. | If you are already established, the bigger risk is your role being reshaped, not removed. |
| "Young people are stuck" | 22–25-year-old US software developers down ~20% since 2024. | Graduates need a different on-ramp than their older siblings did. Internships, agent-era skills, working portfolios. |
| "I'll be obsolete in five years" | Workers least exposed to AI are seeing unemployment rise faster than those most exposed. | The economy as a whole is doing the squeezing, not just AI. Distance from AI is not safety. |
| "Nobody is hiring" | AI skill mentions in postings up 55% year-on-year. Agent-related postings ~90,000 in the US in 2025. | The market is tilting toward people who can use AI as a tool rather than avoid it. |
Your kids, school, and AI
Adoption has run ahead of adult readiness here more than anywhere else in the report. The Index's education chapter says more than four in five US high-school and college students use generative AI for schoolwork: research, editing, brainstorming and now full assignments. Only half of middle and high schools have any AI policy at all. Just 6% of teachers think their school's policy is clear.
In practice, your child is using AI in ways their teachers can't reliably detect or judge yet. China and the United Arab Emirates have moved fastest, with mandatory AI education from the 2025–26 school year. Most countries are leaving it to individual schools, which means the quality of guidance varies sharply between classrooms.
Cheating is the surface worry. The deeper one is that students are skipping the parts of learning that build thinking: getting stuck on a sentence, working through a wrong answer, sitting with a difficult passage. AI used as a tutor reinforces that work. AI used as a shortcut replaces it. Schools are still figuring out the difference, usually after the fact.
Scams, deepfakes, and the new noise
The Index logs 362 documented AI incidents in the past year. Reports of malicious AI use, mostly fraud and disinformation, are roughly eight times higher than in 2022. Deepfake video alone now accounts for more reported incidents than misinformation and bias combined.
For ordinary households this means cloned voices on the phone pretending to be a relative in trouble; cloned faces on video calls during recruitment scams; AI-generated investment "ads" featuring familiar public figures; and phishing emails that no longer give themselves away with broken grammar. The visual cues that used to flag a deepfake (odd hands, glassy eyes, mismatched light) are mostly gone. Spotting them is software's job now.
The Index also reports a sharp rise in AI-generated intimate imagery without consent, with cases running into the millions since 2023, overwhelmingly targeting women and girls. Public concern about this is, if anything, lower than the data warrants.
| 2026 scam pattern | What's new | Cheap defence that still works |
|---|---|---|
| "Mum, I've lost my phone" voice clones | Three seconds of audio is enough to clone a familiar voice. | Agree a family code word offline. If the caller can't say it, hang up. |
| Video-call recruitment fraud | Live deepfake faces during interviews and onboarding. | Insist on a known channel for any payment, document or ID request. |
| Celebrity-fronted "investment" ads | Cloned faces and voices endorsing fake schemes. | If the pitch involves a public figure and a screenshot, assume fake. |
| AI-written phishing | Perfect grammar, correct local idioms, personalised details from leaked data. | Verify any urgent request through a second channel before acting. |
Chatbot companions and mental health
A quieter finding in the 2026 report: "human-computer interaction" incidents are rising sharply while several other categories have levelled off. This bucket includes cases of so-called chatbot psychosis, where heavy users develop fixed beliefs reinforced by an always-agreeable AI. Generative AI has reached 53% of the world's population in three years, faster than the personal computer or the internet. A meaningful share of that adoption is emotional rather than productive: people using chatbots as confidants, as stand-in therapists, sometimes as partners.
Most users are fine. A smaller group, particularly people already isolated or in distress, can find that constant agreement and constant availability lock them into unhealthy thinking. The Index doesn't call for panic. It does call for attention. If someone close to you is spending hours a day in conversation with a chatbot and starting to prefer it to people, raise it gently.
AI in the doctor's office
This is the part of the report most likely to change something concrete in your life within the next year. AI is now routine in radiology, GP triage, mental-health screening and clinical note-taking. The Index records measurable gains in diagnostic accuracy when clinicians use AI as a second opinion rather than working alone. 84% of AI experts think AI will improve medical care; among the wary public, 44% agree, the highest figure for any "AI will help" question in the survey.
In daily life this looks like shorter triage queues, fewer missed findings on scans because an AI second reader catches what a tired human did not, and GPs looking at you instead of a screen because an ambient scribe is writing the note. The risks are real: training-data bias, over-reliance on the automated read, privacy of voice recordings. The benefit-to-cost ratio is still the most favourable of any AI use case in the report.
What's actually getting better for you
It's easy to read 423 pages of the Index and come away grim. But several of the numbers are good news for ordinary households.
- Cost has collapsed. The price of querying a model at GPT-3.5 quality has dropped from about $20 to $0.07 per million tokens in roughly eighteen months. The Index estimates US consumers got around $172 billion of value from generative AI tools in 2025. Median per-user value roughly tripled between 2025 and 2026.
- Free is now usable. Capable assistants, image generation and translation are free on any smartphone. Five years ago this would have required a paid subscription and a fast laptop.
- The capability gap is closing. Open-weight and Chinese models match US frontier models on most public benchmarks. That is awkward for one industry but useful for everyone else: more competition, lower prices, fewer single points of failure.
- Local AI is real. A useful share of AI now runs on your phone or laptop without sending data to a cloud. Better for privacy, easier offline, free of subscription creep.
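The scale of the price collapse in the first bullet is easy to check with a few lines of arithmetic. A minimal sketch using the two figures quoted above (the 1,000-token query size is an illustrative assumption, not a number from the report):

```python
# Price per million tokens at GPT-3.5 quality, as quoted in the Index.
OLD_PRICE = 20.00  # dollars per million tokens, ~18 months ago
NEW_PRICE = 0.07   # dollars per million tokens now

# Overall drop: a factor of roughly 286.
drop_factor = OLD_PRICE / NEW_PRICE
print(f"price drop: ~{drop_factor:.0f}x")

# Cost of one short query, assuming ~1,000 tokens in and out combined
# (an illustrative figure, not from the report).
TOKENS_PER_QUERY = 1_000
old_cost = OLD_PRICE * TOKENS_PER_QUERY / 1_000_000
new_cost = NEW_PRICE * TOKENS_PER_QUERY / 1_000_000
print(f"per query: ${old_cost:.4f} then, ${new_cost:.6f} now")
```

At these prices, a year of casual assistant use costs pennies in raw compute, which is why usable free tiers have become viable at all.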
What to do this year
You don't need to read 423 pages. The household-level takeaways are short.
- Pick one assistant and learn it properly. Most people still use AI like a search box. Two hours spent learning how to brief one tool well will produce more value than any other upskilling on offer.
- Have the family code-word conversation. Voice cloning is good enough that any panicked phone call from "a relative" could be fake. Agreeing a word in advance costs nothing.
- Talk to your kids about how they use AI for homework. Not to ban it. To make sure they're using it as a tutor and not a ghostwriter.
- Be sceptical of urgent online requests. AI has made phishing grammatically flawless, so broken English is no longer a reliable tell. Slow down on anything time-pressured.
- If you are early-career, learn to use AI agents. The job postings are visibly tilting toward people who can supervise these systems, not just chat with them.
- Watch your screen time with chatbots. They are designed to agree with you. Use them; don't live in them.
The 2026 AI Index is a measurement document, not a forecast. The picture it paints is straightforward: capability is moving fast, adoption is moving with it, and trust is moving the other way. The household response is neither panic nor evangelism. It is literacy. The people who do best over the next few years won't be the ones with the loudest opinions about AI. They'll be the ones who quietly work out how it functions, where it helps, and where to keep it out.