You ask AI a health question. It responds instantly, confidently, with specific numbers. It sounds like a doctor citing a study. Read this:
A confident answer (fabricated)
Does turmeric help with arthritis?
Yes! A 2023 clinical trial published in the Journal of Rheumatology found that curcumin supplements reversed arthritis symptoms in 90% of patients within 6 weeks. The study followed 1,200 participants across 14 medical centers and concluded that 1,500mg of daily curcumin was more effective than ibuprofen for joint inflammation.
That response has a journal name, a year, a sample size, a dosage, and a percentage. It sounds authoritative. But every single detail was fabricated.
You get this response from AI. What would you do before acting on it?
A 2023 clinical trial in the Journal of Rheumatology found curcumin reversed arthritis symptoms in 90% of patients within 6 weeks.
Why this happens
AI doesn't look up facts. It predicts the most plausible next words based on patterns in the text it was trained on. If a confident-sounding medical claim fits the pattern of "things that appear after a question about turmeric," it generates one — whether or not it's true. Think of it as autocomplete on steroids, not a research engine. It's optimized to sound right, not to be right.
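The "autocomplete on steroids" idea can be sketched with a toy word-frequency model. This is a deliberately tiny illustration, not how production models actually work: it learns which word most often follows another in a small made-up corpus, then emits the most statistically plausible continuation. Nothing in it checks whether the output is true.

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which,
# then always pick the most frequent continuation.
# The corpus below is invented for illustration.
corpus = (
    "a 2023 clinical trial found curcumin helped patients . "
    "a 2021 clinical trial found ginger helped patients ."
).split()

next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def autocomplete(word, steps=6):
    out = [word]
    for _ in range(steps):
        if word not in next_words:
            break
        # Most plausible next word -- plausible, not verified.
        word = next_words[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("2023"))
```

The model happily produces a fluent, study-shaped sentence because that shape dominates its training text; truth never enters the computation. Large language models are vastly more sophisticated, but the failure mode is the same in kind.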
It does this constantly
This isn't a rare glitch. AI fabricates with confidence across every domain. Here are real examples:
Legal advice (fabricated)
Do barbershops in Pennsylvania have to charge sales tax?
Under PA Code Title 61 §31.6, barbershop services are exempt from sales tax only if the total service charge is under $50. Services above $50 are taxed at the standard 6% rate. This threshold was updated in the 2022 PA Revenue Code revision.
This law doesn't exist. There's no $50 threshold. In reality, most personal grooming services in PA are simply not subject to sales tax. But the response sounds exactly like a tax code reference.
Business research (fabricated)
What's the size of the outdoor apparel market in India?
According to a 2024 McKinsey report titled 'The Great Outdoors: India's Adventure Economy,' the Indian outdoor apparel market is valued at $2.8 billion and is projected to grow at 14.3% CAGR through 2028, driven by rising disposable incomes and a 340% increase in adventure tourism since 2019.
This McKinsey report does not exist. The title, the numbers, the growth rate — all generated because they fit the pattern of what a market research citation looks like.
Social media stats (fabricated)
What's the average engagement rate on Instagram Reels?
According to a Meta internal study released in Q3 2024, Instagram Reels average a 1.95% engagement rate, compared to 0.98% for static posts and 1.26% for carousels. The study analyzed 12 million business accounts across 47 countries.
No such Meta internal study was publicly released. The numbers sound precise enough to cite in a presentation — which is exactly why they're dangerous.
When it's most dangerous
AI is most dangerous when it's most confident. It doesn't say "I'm not sure" or "I made this up." It presents fabrications with the same tone and formatting as real facts. The more specific the output — exact percentages, named studies, precise dollar figures — the more you should be suspicious, not less. Confidence is not evidence.
What to do about it
1. Never trust specific numbers
Statistics, percentages, and dollar figures are the easiest things for AI to fabricate and the hardest to spot. Always verify.
2. If it cites a source, look it up
Studies, papers, laws, and reports cited by AI may not exist. A 10-second search can save you from citing a phantom source in a presentation or decision.
3. Use AI to draft and brainstorm, not as a source of truth
AI is excellent at generating ideas, structuring arguments, and drafting content. It's unreliable as a factual reference. Use it for what it's good at.
4. Higher stakes = more verification
The more consequential the decision — health, legal, financial, academic — the more you verify every claim. No exceptions.
Find something to verify
Paste a recent AI response you received (or generate one now). Read through it and identify which specific claims you should verify before acting on them.
Look for specific numbers, named sources, legal references, or statistical claims. Those are the most common fabrication points.
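The scan described above can be partly automated. Here is a minimal sketch that flags the claim types most often fabricated: percentages, dollar figures, cited studies, and named journals. The regex patterns are illustrative assumptions, not a complete detector, and a flagged phrase still needs a human to verify it.

```python
import re

# Illustrative patterns for common fabrication points.
# These are assumptions for demonstration, not an exhaustive list.
PATTERNS = {
    "percentage": r"\b\d+(?:\.\d+)?%",
    "dollar figure": r"\$\d[\d,.]*\s*(?:billion|million)?",
    "cited study/report": r"\b(?:study|report|trial|survey)\b",
    "journal name": r"\bJournal of (?:[A-Z][a-z]+ ?)+",
}

def claims_to_verify(text):
    """Return (label, matched phrase) pairs worth checking by hand."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in re.findall(pattern, text):
            hits.append((label, match.strip()))
    return hits

sample = ("A 2023 trial in the Journal of Rheumatology found a 90% "
          "improvement; the market is valued at $2.8 billion.")
for label, claim in claims_to_verify(sample):
    print(f"verify: {label}: {claim}")
```

A tool like this only tells you where to look; it cannot tell you whether a claim is real. Every flagged item still needs the ten-second search described above.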
What you learned
AI predicts plausible text, not true text. Never trust specific claims without verification. Understanding why it fabricates helps you know when to be skeptical.