On April 19, 2026, a user on r/antiai wrote a post titled “I caught my child using AI.” Her 9-year-old daughter had been using Google AI off and on for about a week. The post catalogued what the child had been using it for.
“She talked to it about how to get along better with her little sisters, how to improve her swimming times after a swim meet, and then she used it to help her write fan fiction plotlines for her favorite book series,” the mother wrote. “I just had a long conversation with her about it, and she's devastated. She didn't know it had environmental impacts and she now fully understands how sycophantic and insidious it is. She's not in trouble, but she's aware we are not to use that again because we don't want her to lose her creativity.”
Read what the child was doing, not what her mother decided it meant. A 9-year-old went to an AI for conflict resolution with her siblings, coaching on her swim times, and creative writing support. Three distinct asks. Three legitimate needs. Three resources most 9-year-olds do not have on tap at home.
Her mother's response was to convene a talk long enough to leave the child devastated, teach her a set of reasons (environmental impact, sycophancy, creativity loss), and extract a promise not to use the tool again. The post ends with the parent presenting the devastation as a successful outcome.
This post went viral on X within hours. It is the cleanest possible specimen of what the term “AI Derangement Syndrome” was coined to describe: a reflexive, total rejection of AI tools dressed up in a rotating set of reasons, with no serious attempt to weigh the reasons against the concrete, measurable benefit the tool was producing.
The child's three uses are worth examining in isolation, because each one maps onto a known evidence base and each one highlights what was taken from her.
Conflict with her sisters. A 9-year-old who asks an AI how to get along with her siblings is practicing perspective-taking. She is rehearsing language she can use in a real argument. The alternative path for this information is to ask a parent, a teacher, or a counselor. She chose not to ask her parent. That is itself information, and a parent paying attention would treat it as a diagnostic signal rather than as evidence that the AI is the problem.
Swim times. A competitive swimmer at nine is operating inside a niche. Most parents cannot coach stroke mechanics or interval training. Most families cannot afford a private coach on demand. A free AI that will walk through warm-ups, drills, and race-pacing is, in direct dollar terms, the democratization of coaching. The Stanford Tutor CoPilot result illustrates the general pattern: in that study, AI assistance produced the largest gains for students working with the lowest-rated human tutors. AI lifts the bottom of the quality distribution more than the top.
Fan fiction plotlines. The research on teen creativity and AI points the other way from the parent's fear. The Conversation's reporting on teen AI use found that young people using AI companions transferred skills into the real world, boosting creativity and improving their writing. Writers have always used outlines, prompts, critique partners, and writing-group feedback. Asking an AI for a plot beat is, mechanically, the same request a novelist workshops with peers.
The environmental argument is the reason the child was given first, and it does not survive a rudimentary cost-benefit check.
OpenAI's Sam Altman disclosed in a June 2025 blog post, and again at a February 2026 summit covered by CNBC, that an average ChatGPT query uses about 0.32 ml of water and 0.34 Wh of electricity. Google's disclosed figure for Gemini is 0.26 ml. The figure is not peer-reviewed and “average query” is not precisely defined. Writers who have walked the arithmetic through in public, including Andy Masley and software engineer Sean Goedecke, reach higher independent estimates: roughly 3.5 ml for GPT-4o and about 39 ml for reasoning-heavy GPT-5 queries. The range across disclosures and independent estimates spans two orders of magnitude; even the highest number is small at the per-query level.
A child who used Google AI "off and on for about a week" produced water consumption in the single-digit-liter range at most, and plausibly in the sub-liter range. Her parents run a dishwasher. They flush toilets. They water a lawn. The environmental harm story, told to a 9-year-old to make her feel guilty, is dwarfed by every other activity happening in the same house.
This does not mean aggregate AI water and electricity use is zero. MIT's reporting and UN Environment Programme coverage both point out that projected industry-scale demand is a real policy concern. The derangement move is to translate a valid policy-level concern into an individual guilt trip aimed at a nine-year-old, while ignoring that the same concern, rigorously applied, would forbid most of what the family already does.
The sycophancy charge is the most interesting of the three reasons, because it is partly correct. LLMs do have documented sycophancy issues. Models tend to agree with users, validate shaky arguments, and return flattering framings. This is a real alignment problem that labs are actively working on.
The question is what you do about it with your own kid. The correct response is to teach the child to cross-examine the model. Ask it for the counter-argument. Ask it what a coach or a teacher would say differently. Ask it what it might be missing. This is a transferable skill that will serve her in every future interaction with any confident source of information: authority figures, search results, teachers, blog posts, friends.
Instead, the child was told the tool was “insidious.” The word is diagnostic. Insidious is a word adults use to describe corrupting forces they want a child to fear rather than analyze. Sycophancy is a solvable technical limitation. Insidiousness is a moral frame designed to end a conversation.
Derangement runs on category errors. A policy-level concern becomes a personal sin. A technical limitation becomes a moral corruption. A conversation the child is having with a tool becomes a conversation the tool is doing to the child. Each substitution removes the cost-benefit calculation and replaces it with a verdict.
The creativity-loss claim is the one that would have been easiest to test. The parent could have asked to see a fan-fiction plot the child had written before she started using the AI, and one she had written with the AI's help. Compare them. Ask whether the newer one is better, worse, or the same. Ask the child which one she is prouder of. Ask which one she remembers more vividly.
The parent did not do this. She relied on a generic prior that AI assistance corrupts creativity, and she applied it before collecting any data from her own child. The current research on teen AI use does not support that prior. Young people using AI companions reported their skills transferred into real-world writing. Professional authors routinely use AI for brainstorming, outlining, and research without letting it ghostwrite the prose. A 9-year-old asking for plot ideas is, structurally, the mildest possible use.
The risk of AI-assisted writing producing shallow or derivative work in children is real and worth watching. It is also observable in a sample size of one, for free, in her notebooks. Declare a ban only after looking at the notebook.
Every rejection of AI is also a choice about what the child does not get. In this case, she does not get: a patient, private sounding board for working out conflict with her sisters; on-demand coaching on swim technique and training; and a brainstorming partner for her fan fiction.
The family that can afford all three on demand does not have to think about the opportunity cost. Most families cannot. Free AI tutors are one of the few interventions that narrow, rather than widen, educational inequality. The Ghana AI math tutor RCT, the Delhi Mindspark study, and the Nigeria World Bank Copilot evaluation were all deliberately designed around low-resource settings for exactly this reason.
The parent in the post is not trading a tutor for a better tutor. She is trading a tutor for nothing. She is also teaching her daughter that the act of seeking help outside the family is something to feel shame about, which is the most destructive single lesson a parent can embed in a 9-year-old.
Caution is proportional. Caution measures harm and weighs it against benefit. A cautious parent would have asked: what specifically is happening, what is it producing, is the production good or bad, and is any marginal harm worth stopping it for. A cautious parent might have ended up at supervision, at discussion, at using the AI together, or at setting time limits.
Derangement skips the measurement. The reasons are interchangeable and post-hoc. Environmental impact, sycophancy, creativity loss, data privacy, job displacement, authenticity, slop, AI being ugly: the specific reason rotates to fit the audience. The conclusion is always the same, which is the telltale sign of motivated reasoning. A framework that produces the same verdict regardless of the input is not a framework; it is a banner.
The term “AI Derangement Syndrome” was coined by Yardeni Research in a client email, reported by Fortune in February 2026 in the context of equity markets, and has since escaped into broader use. The market version describes investors dumping any stock with AI exposure regardless of fundamentals. The cultural version describes the parent in the post. Both describe the same cognitive pattern: a loud, visible refusal to process new evidence because the refusal itself has become identity.
Anti-AI sentiment is also, separately, a real and growing movement. Data Center Watch's Q2 2025 report, covered by Time, counted 20 projects worth $98 billion in proposed investment stalled by local opposition in a single quarter. Fortune reported in April 2026 on specific violent incidents, including a Molotov cocktail thrown at OpenAI CEO Sam Altman's residence. Pew's June 2025 survey found 50% of US adults feel more concerned than excited about AI in daily life, against 10% more excited than concerned: a five-to-one ratio, up from 37% concerned in 2021. Not all of this is derangement. Some of it is legitimate concern about labor, copyright, privacy, and concentration of power. The distinction is whether the reasoner updates on evidence.
The r/antiai post does not update. It is the specimen.
The viral reach of the post matters because it is not one family. Parenting advice propagates. Each parent who reads the post and decides to extract the same promise from their own child is subtracting one student from a population that already has free access to an intervention the research literature shows produces gains equivalent to a year of schooling.
The children most harmed by this are not the children of families who can afford the human alternatives. They are the children whose parents watched the viral post, heard the reasons repeated enough times to believe them, and told their kid to close the tab. The equity effect runs exactly backwards from the one the critique claims to care about.
The children least harmed are the children of the people building the tools. Those parents see the evidence directly, spend tens of thousands on tutors regardless, and will keep doing both.
The cleanest counter-move a parent can make, if they want to preserve their child's critical thinking and creativity without shutting down access to one of the best tutoring resources ever built, is to use the tool with the child. Ask it hard questions. Catch it being wrong in real time. Ask for counter-arguments. Compare its answer to a textbook. Treat it as one voice among many rather than as a corrupting force to be exorcised. That is the use pattern that produces both the learning gains and the epistemic independence the parent claims to want.
The 9-year-old in the post wanted to get along with her sisters, swim faster, and write better. She had figured out, on her own, where to find help for each one. The adult move is to be curious about the child who did that. The deranged move is to make her cry about it.