How to win in a 10-slot world
Page 2 of Google is on its way out. And we should be thinking differently.

Day 214/100
Hey—It's Tim.
I swear coffee and Search Console are the two biggest constants in my life right now.
Here are the search stories I’m following.
The ten-result era
Google’s now showing ten results per query.
Not “about 100.”
Ten.
That means the “position 27 on page three” feedback loop we all quietly relied on?
Gone.
Fewer slots.
Harsher cutoffs.
Way less signal about whether you’re “nearly there” or “nowhere near.”
The collateral damage
Rank trackers were built for a 100-slot world.
Sampling, pagination assumptions, visibility models—shot in the legs.
Your dashboard might still say “avg. position 18.”
But in a ten-slot SERP, “18” is the friend who promises they’re on their way and never shows.
For me, this breaks a favorite move:
Publish → see the first index page → gauge velocity → decide if we double down or re-angle.
That telemetry is mush now.
The LLM ranking news you can use
Two big drops this month on how ChatGPT and Perplexity pick winners.
ChatGPT’s playbook got clearer.
Metehan dug into live config and found a named reranker (ret-rr-skysight-v3), intent filters, and a “freshness scoring profile” switched on. Translation: recency and clear task-fit matter, and a neural reranker reshuffles results beyond raw retrieval. There’s also evidence of Reciprocal Rank Fusion (RRF) in the code path - multiple query variants fused into one rank.
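If RRF is new to you, here’s a minimal sketch of the standard formula (not ChatGPT’s actual implementation - the k=60 constant and the example URLs are illustrative assumptions). Each query variant produces its own ranked list, and a page’s fused score is the sum of 1 / (k + rank) across those lists:

```python
# Minimal Reciprocal Rank Fusion (RRF) sketch.
# Each inner list is the ranked output for one query variant;
# a page's fused score is the sum of 1 / (k + rank) across lists.

from collections import defaultdict

def rrf_fuse(ranked_lists, k=60):
    """Fuse several ranked result lists into one ordering.

    ranked_lists: list of lists of doc IDs/URLs, best first.
    k: damping constant; 60 is the usual default from the RRF paper.
    """
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc in enumerate(results, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical query variants for one job-to-be-done:
variants = [
    ["you.com/guide", "rival.com/post", "you.com/setup"],      # "how to do X"
    ["rival.com/post", "you.com/guide", "docs.example/spec"],  # "best tool for X"
    ["you.com/setup", "you.com/guide", "forum.example/q"],     # "X not working"
]

print(rrf_fuse(variants))
```

Notice the shape of the math: a page that shows up mid-top in every variant beats a page that wins one variant and vanishes from the rest. That’s why clusters matter below.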
Perplexity patterns got mapped.
He also published “59+ ranking patterns,” including an L3 entity reranker, request-level factors, and a manually curated set of authoritative domains that get inherent boosts. One killer line for the skeptics: when his page wasn’t indexed by Google, his Perplexity experiments failed. Indexing still gates LLM visibility.
Why this matters to you
If ChatGPT fuses multiple queries, clusters beat single-page heroes. You want consistent mid-top placements across a family, not one #1 and four invisibles.
If freshness is weighted, stale evergreen quietly loses to newer, tighter explainers. Ship updates with visible dates and changelogs.
If Perplexity has curated “trusted domains,” build citations and collaborations that intersect those ecosystems (docs, code, research, pro tools) so your pages inherit trust.
What this looks like in practice
Pick one job and publish a cluster spine: explainer → decision guide → setup guide → troubleshooting. Each should answer a different search intent cleanly in the first screen. Then check if you surface across several phrasings in ChatGPT web answers—don’t chase just one keyword.
Add a monthly freshness sprint: update the top five money pages with new screenshots, dated proofs, and a short “what changed” block.
For Perplexity, add source-rich sections (standards, specs, docs, GitHub issues, whitepapers) and link out with author names. You’re signaling entity depth to that L3 reranker and tapping into the curated domain graph.
If you’ve got zero LLM citations today, don’t rage-quit SEO. These drops basically say: topical breadth, fresh proof, and real sources still win - plus, Google indexing is still the ignition key.
The countertrend nobody wants to hear
“SEO is dead” is fashionable.
Here’s the uncomfortable counterpoint I keep bumping into while poking at ChatGPT, Claude, and Perplexity (yes, I’ve been trying to jailbreak a few models to spill more than they should):
If you don’t already index well in Google,
you rarely get cited in LLM answers.
No index, no snippet.
No snippet, no cite.
No cite, no brand tailwind from answer engines.
AI visibility still orbits the gravitational mass of Google’s index.
That may change one day.
But for today, Google is the sun.
TL;DR - ChatGPT now fuses multiple query variants (RRF) with a neural reranker and a freshness bias, so clusters + recent updates beat one “perfect” page. Perplexity boosts entity-rich sources and trusted domains, so cite real docs, name authors, and build authority paths. And none of this matters if you’re not indexed in Google - index first, then chase LLM citations.
The new search posture
Old way: track pages one through ten, play for incremental lift.
New way: play for trust signals that cross the harsher threshold.
Think entity completeness over keyword sprinkling.
Think query families over single phrases.
Think “be the best explainer of the job” over “be the most optimized instance of the string.”
Penfriend-wise, I’m shifting our tests:
Less “did we move from #34 → #22?”
More “did we earn impressions on the main article and get pinned anywhere that matters?”
See you tomorrow.

✌️ Tim "Zero Click Survivor" Hanson
CMO @Penfriend.ai
Same brain, different platforms: X, Threads, LinkedIn.
P.S. Freshness tripwire: Add a tiny “What changed - Sep 20, 2025” box above the fold, then ask ChatGPT: “List the best guides to [job], updated after Sep 2025.” If it doesn’t say your name back, you’re not fresh enough.

Penfriend.ai
Made by content marketers. Used by better ones.
What to do next
Share This Update: Know someone who’d benefit? Forward this newsletter to your content team.
Get your First 3 Articles FREE EVERY MONTH! We just dropped the biggest update we’ve ever done to Penfriend a few weeks ago. Tone matching with Echo, Hub and Spoke models with Clusters, and BoFu posts.
Let Us Do It For You: We have a DFY service where we build out your next 150 articles. Let us handle your 2025 content strategy for you.