I spent the first half of 2025 obsessed with a number: 98%.
That was the "Human Score" I tried to hit on every article I published. I was paying $50 a month for Originality.ai, tweaking adjectives, breaking up sentences, and intentionally inserting grammatical flaws just to get that little green checkmark.
Then I dug into Google’s patent filings, and I realized I was optimizing for a metric that doesn’t exist.
If you are currently sweating because your latest article scored a "64% AI Probability," stop. You are burning budget to solve a problem you don’t have. The reality of SEO in 2025 is counter-intuitive: Google does not care if you use AI. They care if you are boring.
Here is the forensic evidence—and the specific "Hybrid" workflow I use to rank AI-assisted content without triggering the spam filters.
The "Written by People" Retraction
The smoking gun for Google’s stance isn't a leaked memo; it’s a deletion.
For years, Google’s Helpful Content System documentation explicitly stated that they prioritize content "written by people, for people."
In the September 2023 Helpful Content Update, they quietly deleted the first half of that sentence. Go check the Wayback Machine. It now reads: "Content created for people."
This wasn't an accident. It was a tacit admission that in a world of LLMs (Large Language Models), the origin of the text is irrelevant. If Claude 3.5 Sonnet can explain Python decorators more clearly than a tired freelancer on a deadline, Google wants to rank the robot.
The Real Enemy: SpamBrain (Not The English Teacher)
To understand why your "98% Human" score is vanity, you have to understand the difference between what you think Google does and what they actually do.
Third-party tools (Originality.ai, Turnitin) use Forensic Detection. They measure Perplexity (how predictable each next word is to a language model) and Burstiness (how much sentence length and rhythm vary). They are trying to answer one question: did a machine generate this syntax?
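To see how crude one of these signals is, here is a toy Burstiness measure: the coefficient of variation of sentence lengths. (This is my own illustration, not any vendor's actual formula; real detectors also need a language model to score Perplexity.)

```python
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.
    Low = uniform, 'machine-like' rhythm; high = varied, 'human' rhythm."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)

flat = "The tool is good. The tool is fast. The tool is cheap. The tool is new."
varied = ("Buy it. Seriously, after six months of daily use across three "
          "client sites, nothing else came close. It just works.")
print(burstiness(flat) < burstiness(varied))  # the varied text scores higher
```

Notice that "fixing" this score is trivial, which is exactly why it is a poor proxy for quality.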
Google doesn't have time for syntax police. They run SpamBrain, an AI system launched in 2018 and massively upgraded in 2022. SpamBrain is not checking if you used ChatGPT. It is checking for Scaled Abuse Patterns.
According to Google’s 2022 Webspam Report, SpamBrain looks for:
- Scale: Did this domain publish 500 pages in 48 hours?
- Stitching: Is the content just a Frankenstein monster of the top 3 search results?
- Gibberish: Does the text actually answer the query?
I have seen 100% AI-written articles rank #1 because they were accurate and helpful. I have seen 100% human-written articles deindexed because they were rambling fluff.
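To make the Scale pattern above concrete, here is a toy sliding-window check that flags a domain publishing hundreds of pages in a short burst. (The threshold and logic are my own illustration; SpamBrain's real signals are not public.)

```python
from datetime import datetime, timedelta

def flags_scaled_abuse(publish_times, max_pages=500, window_hours=48):
    """Toy check: does any 48-hour window contain more than max_pages
    publishes? Illustrative only, not Google's actual logic."""
    times = sorted(publish_times)
    window = timedelta(hours=window_hours)
    start = 0
    for end, t in enumerate(times):
        # Shrink the window from the left until it spans <= 48 hours.
        while t - times[start] > window:
            start += 1
        if end - start + 1 > max_pages:
            return True
    return False

# 600 pages in ten hours trips the check; 100 spread over a year does not.
burst = [datetime(2025, 1, 1) + timedelta(minutes=i) for i in range(600)]
steady = [datetime(2025, 1, 1) + timedelta(days=3 * i) for i in range(100)]
print(flags_scaled_abuse(burst), flags_scaled_abuse(steady))
```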
The Metric That Actually Matters: "Information Gain"
This is the coolest part of the research. In 2020, Google published a patent application (US20200349181A1) for something called "Contextual estimation of link information gain."
Information Gain is the anti-spam metric.
- Low Gain: You wrote an article about "Best CRM Software" that repeats the exact same features list as the other 10 articles on Page 1.
- High Gain: You wrote the same article, but you included a table comparing their API latency speeds—data that nobody else has.
If your Information Gain is high, Google doesn't care if a robot wrote the sentences.
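A rough way to build intuition for Information Gain: what fraction of your article's phrases appear in none of the competing results? Here is a toy trigram-overlap proxy (my own illustration; the patent describes a learned model, not this formula):

```python
def ngrams(text, n=3):
    """Set of word n-grams in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def information_gain(candidate, competitors, n=3):
    """Toy proxy: share of the candidate's trigrams that appear in no
    competitor. 0.0 = pure rehash, 1.0 = entirely novel phrasing."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    seen = set().union(*(ngrams(c, n) for c in competitors))
    return len(cand - seen) / len(cand)

serp = ["the best crm has contact management and email automation",
        "top crm tools offer contact management and email automation"]
rehash = "the best crm has contact management and email automation"
fresh = "we measured api latency for each crm under a sustained load test"
print(information_gain(rehash, serp), information_gain(fresh, serp))
```

The rehash scores zero; the latency benchmark scores high, because nobody else on the toy "Page 1" has that data.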
My "Hybrid" Workflow: How to Actually Rank
So, if we ignore the detectors, how do we ensure safety? We don't try to trick the algorithm; we feed it "High Gain" signals.
I use a strict protocol I call the "Experience Injection." It satisfies Google's E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) without lying about AI usage.
1. The "Un-fakeable" Evidence Rule
AI cannot taste food, unbox a gadget, or log into a dashboard. It can only hallucinate about those things.
My Rule: Every article I publish must contain at least one piece of media that a robot could not physically generate.
In Practice: I don't just write about an SEO tool; I take a screenshot of the settings panel with my specific error logs visible. Image-analysis systems like Google's Cloud Vision API can distinguish a one-of-a-kind screenshot from a recycled stock image. Unique media = Human Experience.
2. The "Data-First" Prompting Strategy
Most people use ChatGPT like this: "Write an article about X."
This guarantees Zero Information Gain because the AI will just average out its training data.
I use the "Formatter" approach:
- I do the research: I find a new statistic, I interview an expert, or I run a test.
- I feed the data: "Here is a transcript of an interview I did with a Python developer. Use these quotes to write a section on async/await."
- The Result: The prose is AI (clean, structured), but the insight is human.
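A minimal sketch of the "Formatter" prompt assembly, assuming you already hold the research in hand (the function name and instruction wording are my own; any chat-model API would consume the resulting string):

```python
def build_formatter_prompt(topic: str, source_data: list[str]) -> str:
    """Assemble a 'data-first' prompt: the model formats, you supply
    the facts. Bullet markers keep each sourced item distinct."""
    evidence = "\n".join(f"- {item}" for item in source_data)
    return (
        f"Write a section on {topic}.\n"
        "Use ONLY the source material below. Do not add facts of your own.\n"
        "Quote at least two items verbatim.\n\n"
        f"Source material:\n{evidence}"
    )

prompt = build_formatter_prompt(
    "async/await in Python",
    ["'await suspends the coroutine, not the thread' -- interview, dev A",
     "Our benchmark: 10k concurrent requests, 1.2s p99 with asyncio"],
)
print(prompt)
```

The key design choice is the "ONLY the source material" constraint: it turns the model from an author (which averages its training data) into a formatter of your data.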
3. The "Sandwich" Editing Method
I never publish raw output. It’s too "flat." I use the Sandwich Method:
- The Top Slice (Human): I write the Intro and the Hook manually. I need to grab the reader’s emotion immediately. AI is terrible at empathy.
- The Meat (AI): The definitions, the "how-to" steps, the comparison tables. AI is better than me at structure. I let it handle the boring stuff.
- The Bottom Slice (Human): The "Verdict." I give a strong, opinionated recommendation. AI loves to say "It depends on your needs." I say, "Buy this one, it's better."
The Verdict
Stop obsessing over whether your content looks like it was written by a human. Start obsessing over whether it helps a human.
If you spend 3 hours rewriting sentences to trick a detector, you have improved the article for a piece of software. If you spend those 3 hours adding unique screenshots, expert quotes, and rigorous formatting, you have improved the article for the user.
And ironically, that’s the only "bypass" that actually works.