Most teams measure pageviews. Pageviews tell you nothing about whether the docs worked.
A page with 10,000 views and a 90% bounce rate in under 8 seconds is not a successful page. It's a page people rage-clicked away from. The signal you want is not "did they arrive" but "did they get their answer and then do something." That requires different metrics, named and measured deliberately.
Here is the shortlist the rest of this post will expand on. I call it the Five Signals Framework, and these are the only five numbers worth putting on a docs dashboard:
- Time-to-Answer (TTA): how long from page load to the reader finding what they came for
- Zero-Result Rate (ZRR): percentage of on-site searches that return nothing
- Deflection Delta: drop in support tickets on a topic after the docs for it ship
- Page-to-Action (P2A): percentage of doc readers who then take a product action
- Link Decay: 404s as a percentage of total internal links
39% of teams don't track documentation metrics at all (State of Docs Report 2025). Of the 61% who do, most track the wrong ones. The framework below replaces the vanity stack with five signals that each map to a decision you can act on.
Signal 1: Time-to-Answer (TTA)
Time-to-Answer is the gap between a reader loading a docs page and finding the specific thing they needed. You can't measure it directly without an exit survey, so you use a proxy: the share of sessions whose time on the page falls inside a band that plausibly means the reader found what they came for.
How to measure: In Google Analytics or Plausible, segment docs traffic and calculate the percentage of sessions between 15 seconds and 3 minutes on any single page. Under 15 seconds is a bounce, over 3 minutes is probably confusion. The band in the middle is your "answered" cohort. Track it per page.
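As a minimal sketch of the banding logic, assuming you can export sessions from your analytics tool as (page, duration-in-seconds) pairs (the export format and the 15s/3min cutoffs are assumptions you should tune):

```python
from collections import defaultdict

ANSWERED_MIN, ANSWERED_MAX = 15, 180  # seconds: the "answered" band

def answered_band_share(sessions):
    """sessions: iterable of (page, duration_seconds) pairs.
    Returns {page: percent of sessions in the 15s-to-3min band}."""
    totals, in_band = defaultdict(int), defaultdict(int)
    for page, duration in sessions:
        totals[page] += 1
        if ANSWERED_MIN <= duration <= ANSWERED_MAX:
            in_band[page] += 1
    return {page: round(100 * in_band[page] / totals[page], 1)
            for page in totals}

sessions = [("/docs/auth", 45), ("/docs/auth", 5),
            ("/docs/auth", 120), ("/docs/auth", 400)]
answered_band_share(sessions)  # {"/docs/auth": 50.0}
```

Run it per page, weekly, and watch the trend rather than the absolute number.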
What "good" looks like: 55-70% of sessions in the 15-second-to-3-minute band on reference pages. Higher on conceptual pages (people read them), lower on lookups (people grab and go). If a page sits at 20%, either the answer is buried or the page is the wrong page for the query driving traffic to it.
The deeper move: install a one-question widget ("Did this page answer your question?") on the top 10 pages by traffic. Two months of that data beats six months of session-duration guessing. The documentation strategy that follows reader behavior starts here. Measure the hit rate of the pages you already have before writing new ones.
Signal 2: Zero-Result Rate (ZRR)
Every docs site with a search bar logs the queries. Most teams never look at them. The single most useful spreadsheet in your content ops pipeline is a weekly dump of searches that returned zero results.
How to measure: If you're on Algolia DocSearch, Typesense, or a self-hosted Lunr index, enable query logging. Count the percentage of searches per week where the results array is empty. You want that number below 8%. Above 15% and your users are running into a content wall often enough to leave.
What "good" looks like: Under 8% weekly ZRR on a mature site. Under 15% on a new one. Every zero-result search is a content brief. If 47 people searched "rate limits" last week and got nothing, your next doc is not the one on your roadmap, it's the one on rate limits.
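The weekly review can be a few lines of code, assuming your search logs export as (query, result-count) pairs (the log format here is an assumption; Algolia and Typesense both expose this data through their analytics APIs):

```python
from collections import Counter

def zero_result_rate(query_log):
    """query_log: iterable of (query, result_count) pairs.
    Returns (zrr_percent, content_briefs), where content_briefs is a
    list of (normalized_query, times_searched) for zero-result queries,
    most frequent first -- i.e. your writing queue."""
    total = 0
    misses = Counter()
    for query, result_count in query_log:
        total += 1
        if result_count == 0:
            misses[query.strip().lower()] += 1
    zrr = round(100 * sum(misses.values()) / total, 1) if total else 0.0
    return zrr, misses.most_common()

log = [("rate limits", 0), ("webhooks", 12), ("Rate Limits", 0), ("auth", 3)]
zero_result_rate(log)  # (50.0, [("rate limits", 2)])
```

Normalizing case before counting matters: "Rate Limits" and "rate limits" are the same content gap.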
This is why the case against search bars on small docs sites is wrong for teams past ~50 pages. The bar is not just a navigation aid, it is the cheapest user research instrument you'll ever deploy. Free teams on Docsio get basic visitor analytics for the last 7 days; Pro unlocks 30- and 90-day windows, which is usually when ZRR trends become readable. On any platform, if your search logs aren't being reviewed weekly, you're throwing away your most actionable signal.
Signal 3: Deflection Delta
This is the metric that gets docs funded. Deflection Delta measures the drop in support tickets on a specific topic in the 30 days after a relevant doc ships, compared to the 30 days before.
How to measure: In your support tool (Intercom, Zendesk, HelpScout), tag tickets by topic. When you ship a new doc, note the date. Thirty days later, pull ticket volume for that tag before vs. after. The percentage change is your Deflection Delta. Teams with strong docs regularly see 25-40% drops on specific topics; AI-supported deflection on top of well-structured docs pushes that to 60%+ (Pylon, 2025).
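The arithmetic is deliberately simple; a sketch, assuming you've already pulled per-tag ticket counts for the two 30-day windows:

```python
def deflection_delta(before_count, after_count):
    """Percent change in ticket volume for one topic tag:
    30 days before a doc ships vs. 30 days after.
    Negative = deflection (tickets dropped)."""
    if before_count == 0:
        return None  # no baseline, so no meaningful delta
    return round(100 * (after_count - before_count) / before_count, 1)

deflection_delta(80, 52)  # -35.0, i.e. a 35% drop on this topic
```

The `None` case matters: a topic with zero tickets before the doc shipped isn't a deflection win, it's a doc nobody needed yet.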
What "good" looks like: Any negative delta is good. A 20%+ drop on a topic is a clear win and the number you use when asking for headcount. Zero or positive delta (tickets went up) means the doc didn't answer the real question; go read the tickets and rewrite.
The mistake teams make is treating deflection as a site-wide number. Site-wide averages hide the wins and the losses. Measure per topic, per doc, per month. One doc deflecting 80 tickets a month is worth more than ten docs deflecting five each, and you can only see that at the topic grain.
Signal 4: Page-to-Action (P2A)
Docs aren't a reading exercise, they're a conversion surface. Page-to-Action measures the percentage of docs readers who then take a product action: sign up, complete onboarding, install the SDK, fire the first API call.
How to measure: Add UTM parameters or a simple referrer attribute on every outbound link from docs to product. In your product analytics tool (PostHog, Mixpanel, Amplitude), create a funnel: "docs page viewed" to "action completed within 24 hours." Calculate the ratio.
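If your analytics tool doesn't build this funnel for you, the join is straightforward. A sketch, assuming you can export first-docs-view and first-action timestamps per user (the export shape is an assumption):

```python
from datetime import datetime, timedelta

def page_to_action(doc_views, actions, window_hours=24):
    """doc_views: {user_id: datetime of docs page view}
    actions: {user_id: datetime of product action}
    Returns percent of doc readers who acted within the window."""
    if not doc_views:
        return 0.0
    window = timedelta(hours=window_hours)
    converted = sum(
        1 for user, viewed in doc_views.items()
        if user in actions
        and timedelta(0) <= actions[user] - viewed <= window
    )
    return round(100 * converted / len(doc_views), 1)

viewed = {"u1": datetime(2025, 3, 1, 10), "u2": datetime(2025, 3, 1, 10)}
acted = {"u1": datetime(2025, 3, 1, 14)}
page_to_action(viewed, acted)  # 50.0
```

The `timedelta(0)` lower bound excludes users who acted before reading the docs; counting them would inflate P2A with conversions the page didn't cause.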
What "good" looks like: 3-8% P2A on top-of-funnel pages (quickstarts, overviews). 15-25% on pages explicitly positioned as next steps (install page, first API call tutorial). If a quickstart has under 2% P2A, the gap is usually not the product, it's the doc. Readers finish the page confused about what to do next.
This is also where docs-driven development pays back. Teams who write the doc before the feature get higher P2A because the doc explains the feature in the order a new user actually needs it, not the order the engineer built it in.
Signal 5: Link Decay
The boring one that costs you the most trust. Link Decay is the count of 404s divided by the total number of internal links on your docs site. Every broken link is a small stab at reader confidence, and at scale they compound into "this product feels unmaintained."
How to measure: Run a weekly crawl with Screaming Frog, Sitebulb, or a free tool like Lychee (open source). Count broken internal links, divide by total internal links, multiply by 100. Crawls take under 10 minutes for most sites.
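The rate itself is one division over the crawl output. A sketch, assuming you can get the crawler's results as (url, HTTP status) pairs (Screaming Frog and Lychee both export per-link status reports):

```python
def link_decay(crawl_results):
    """crawl_results: iterable of (url, http_status) pairs, one per
    internal link found in a crawl. Returns decay rate as a percent."""
    results = list(crawl_results)
    if not results:
        return 0.0
    broken = sum(1 for _, status in results if status == 404)
    return round(100 * broken / len(results), 1)

crawl = [("/docs/a", 200), ("/docs/b", 404),
         ("/docs/c", 200), ("/docs/d", 200)]
link_decay(crawl)  # 25.0
```

Depending on how strict you want to be, you might also count 410s and unresolvable redirects as broken; the 404-only version here is the floor.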
What "good" looks like: Under 0.5% decay rate. Anything above 2% is a signal your content ops has broken down: people are deleting pages, renaming URLs, or shipping new docs that reference moved pages, and nobody is catching it. The fix is a pre-merge link check in CI, not a quarterly cleanup sprint.
Link Decay also affects the other four signals upstream. A broken link inside a popular page throws off TTA (people hit the link, then a 404, then bounce) and tanks P2A (the signup link is dead). Fix decay first, then measure the rest.
Putting the framework together
The Five Signals Framework is deliberately small. Five numbers, each with a clear measurement method, each mapped to a decision. A dashboard with TTA, ZRR, Deflection Delta, P2A, and Link Decay tells you what to write next, what to rewrite, and what to delete. A dashboard with 30 metrics tells you nothing because nobody looks at it.
The working order, if you're starting from zero:
- Install search query logging this week. ZRR is the cheapest signal to capture and the one with the clearest next action.
- Run a link crawl next week. Fix everything above 2%.
- Tag tickets by topic and start tracking Deflection Delta on your next five doc ships.
- Add session-duration segmenting for TTA after one month of clean data.
- Wire up P2A attribution last, because it requires the other four to be clean to interpret.
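Once all five are flowing, the dashboard check reduces to a handful of thresholds. A sketch using the targets from this post (the signal names and the snapshot dict shape are illustrative, not a real API):

```python
# Thresholds drawn from the targets in this post; tune per site.
THRESHOLDS = {
    "tta_band_pct":   lambda v: v >= 55,   # answered-band share, reference pages
    "zrr_pct":        lambda v: v < 8,     # weekly zero-result rate, mature site
    "deflection_pct": lambda v: v <= -20,  # per-topic ticket change
    "p2a_pct":        lambda v: v >= 3,    # top-of-funnel pages
    "decay_pct":      lambda v: v < 0.5,   # broken internal links
}

def dashboard_flags(snapshot):
    """snapshot: {signal_name: latest value}. Returns the signals
    that are outside their target and need action this week."""
    return [name for name, ok in THRESHOLDS.items()
            if name in snapshot and not ok(snapshot[name])]

dashboard_flags({"zrr_pct": 12.0, "decay_pct": 0.2})  # ["zrr_pct"]
```

Five checks, each pointing at one of the decisions described above: write the missing doc, fix the links, rewrite the page.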
Most docs measurement projects fail because teams try to stand up all five on day one and drown in instrumentation. Pick one signal per month. By month five you have a real dashboard and a real answer to "are the docs working," which is the question that gets docs funded the next time anyone asks.
Related reading: how to organize documentation for the structure that makes these signals meaningful, plus why developers don't read documentation for the behavioral context behind low TTA scores.
