Your website just passed automated accessibility testing with a perfect 100. Your team celebrates. Your compliance officer marks it as resolved. And none of this matters because you’re still locking out millions of people from using your product.
This is the reality of relying on automated accessibility tools, which catch approximately 30% of accessibility issues; the criteria they can evaluate represent only a small portion of what needs to be assessed. A perfect score tells you that you’ve addressed, at most, that 30%, not that your site is actually accessible. The unfortunate truth is that these tools are fundamentally limited by what can be measured programmatically.
Over 20 years of accessibility audits, I’ve seen this pattern repeatedly: organizations focus on automated testing only, without performing the manual testing necessary to truly ensure conformance. When a site isn’t fully tested and remediated, blind users struggle with broken screen reader interactions, users with cognitive disabilities can’t understand confusing navigation, and motor-impaired users encounter components that require precision clicking. The distance between automated compliance and real-world accessibility is a critical blind spot in how most organizations approach this work.
What Automated Tools Actually Measure
To be clear, automated testing is valuable for its consistency and repeatability, and it succeeds at catching structural problems that follow consistent rules. These tools excel at identifying missing alt text on images, detecting low color contrast ratios, finding form fields without labels, and spotting empty heading or button text. The WCAG criteria these tools evaluate are objective, measurable, and consistent across every page. But the things automated tools can detect represent accessibility’s easiest problems to solve.
Consider color contrast. A tool can measure the RGB values of text and background colors, calculate the contrast ratio, and report whether it meets WCAG AA standards (4.5:1 for normal text). This is genuinely important, because low contrast does harm people with color blindness and low vision. But it’s also just one variable in whether someone can actually read your content. The automated test can’t measure font size in relation to contrast, can’t assess whether your color palette creates perceptual groupings that help or confuse navigation, and can’t judge whether the contrast actually works for your users in context.
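To see why this check is such a comfortable fit for automation, here is a minimal sketch of the contrast calculation itself, using the relative-luminance formula from WCAG 2.x. The function names and sample colors are just illustrative:

```typescript
// Minimal sketch of the contrast check an automated tool performs.
// The formula comes from the WCAG 2.x relative-luminance definition;
// the function names and sample colors are illustrative only.

type RGB = [number, number, number]; // 0–255 per channel

// Convert an sRGB channel to its linear value.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: RGB): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// #767676 text on white comes out around 4.54:1, which just passes AA for normal text.
const ratio = contrastRatio([0x76, 0x76, 0x76], [0xff, 0xff, 0xff]);
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA" : "fails AA");
```

The check is a closed formula with a hard threshold, which is exactly why machines handle it so reliably; whether the text is actually readable in context is a different question entirely.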
This pattern repeats across every category automated tools evaluate. They measure presence but not quality. They test individual elements but not interactions. They verify static properties but not user workflows.
The 70% Gap: What Gets Missed
The inaccessible experiences that impact users most often fall outside what automated tools can detect. These typically involve:
Keyboard Navigation and Interactive Components. A tool can verify that form inputs have labels. It cannot verify that you can reach all interactive elements by pressing Tab, that focus indicators are visible and obvious, or that complex widgets like date pickers, autocomplete fields, or modal dialogs work correctly with keyboard-only navigation. Keyboard testing requires hands-on interaction from someone who understands how keyboard users actually navigate web pages (a rough scripted sketch of this kind of check appears after this list).
Screen Reader Interactions and Announcements. Automated tools can verify that a form element has a label or that alt text exists for an image. They cannot verify that the alt text is appropriate or that screen readers announce content in a meaningful sequence. I’ve audited sites where many images had alt text that technically met WCAG standards but was completely useless: overly verbose, repetitive, or full of information screen reader users didn’t need, all because the team was chasing a passing score with an automated tool.
Cognitive and Language Accessibility. Can users with dyslexia adjust text spacing and line height? Is your language clear and plain, or filled with jargon and unnecessary complexity? Do users with cognitive disabilities understand your interface patterns? Automated tools have no reliable, standardized mechanism to measure readability or cognitive load, so these considerations are almost entirely absent from automated testing despite affecting a significant portion of the population.
Sensory and Motor Accessibility. Target sizes matter for mouse users and touch users alike. Automated tools can flag buttons that are impossibly small, but they cannot detect when animations distract from usability or when gestures required to use your interface exclude people with limited motor control. They also cannot detect when visual indicators are the only way to communicate important states or changes.
Context and User Flows. Can a screen reader user complete a purchase? Can someone using voice control navigate your form? Can a user with a motor disability successfully use your search function? These questions involve testing actual tasks in realistic conditions, something that requires human judgment and domain knowledge about how different assistive technologies work.
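As an illustration of the keyboard point above, here is a rough Playwright sketch that walks the tab order and flags only the mechanical failures. The URL and the number of Tab presses are placeholders, and even a passing run says nothing about whether the order makes sense or whether a date picker is actually usable; that judgment still takes a human:

```typescript
import { test, expect } from '@playwright/test';

// Rough sketch: walk the tab order and record what receives focus.
// The URL and the 25-press limit are placeholders for a real page and flow.
test('tab order stays on visible, identifiable elements', async ({ page }) => {
  await page.goto('https://example.com/checkout');

  const focusPath: string[] = [];
  for (let i = 0; i < 25; i++) {
    await page.keyboard.press('Tab');
    const info = await page.evaluate(() => {
      const el = document.activeElement as HTMLElement | null;
      if (!el || el === document.body) return null; // focus fell off the page
      const rect = el.getBoundingClientRect();
      return {
        label: el.getAttribute('aria-label') || el.textContent?.trim() || el.tagName,
        visible: rect.width > 0 && rect.height > 0,
      };
    });

    // Catch only the mechanical failures: lost focus or focus on an invisible element.
    expect(info, `Tab press #${i + 1} lost or hid focus`).not.toBeNull();
    expect(info!.visible).toBe(true);
    focusPath.push(info!.label);
  }

  // A person still has to judge whether this order actually makes sense.
  console.log(focusPath.join(' -> '));
});
```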
The gap between what automated tools measure and what actually matters is not a minor oversight—it’s structural. Automated testing measures the scaffold, not the building.
Why Manual Testing with Real Assistive Technology Matters
Manual testing with actual screen readers, voice control software, and other assistive technologies reveals problems that no automated tool ever will. This is the only way to know whether your site is genuinely usable.
When we conduct accessibility audits, we test with NVDA and JAWS on Windows, VoiceOver on macOS, VoiceOver on iOS, and TalkBack on Android. These are real tools that real people use, and they behave differently from each other. We test with keyboard-only navigation, voice control, and zoom functionality. In fact, we evaluate every component in an audit against a checklist of over 260 items aimed at ensuring complete coverage of WCAG 2.2.
The results are consistently surprising to teams who achieved perfect automated scores. We find forms that announce labels but position them in ways that confuse voice control users. We find mobile interfaces where touch targets are technically large enough but positioned so that they’re difficult to tap accurately. We find dynamic content that updates without warning, disrupting screen reader workflows. We find navigation structures that are technically valid but confusing for people who rely on landmarks to navigate.
The Real Cost of Incomplete Testing
Organizations that stop at automated testing and declare victory often discover accessibility problems only when users report them. Complaints about inaccessible websites come from real people struggling to accomplish real tasks, and that always carries reputational and sometimes legal consequences. It’s also preventable.
The cost differential matters too. A comprehensive accessibility audit that includes manual testing costs significantly less than defending a lawsuit, rebuilding components that were implemented without understanding assistive technology constraints, or addressing accessibility issues after they’ve caused customer frustration. We’ve seen teams rebuild features multiple times because the initial implementation didn’t account for how screen readers announce updates or how voice control interprets button labels.
Automated testing is a starting point, not a destination. Perfect automated scores are baseline hygiene and something that should be maintained. They’re necessary but nowhere near sufficient.
Building a Complete Testing Strategy
Genuine accessibility requires both automated and manual testing. Start with tools to catch basic structural problems. Use them in your development pipeline to prevent regressions. But don’t stop there.
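One way to wire automated checks into a pipeline, assuming a Playwright setup with the @axe-core/playwright package, is a regression test along these lines. The URL and tag list are placeholders you would adapt to your own pages and axe-core version:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Minimal regression gate: fail the build when axe-core reports new violations.
// The URL is a placeholder; run this against the pages you actually ship.
test('home page has no machine-detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com/');

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit the scan to WCAG A/AA rules
    .analyze();

  // Passing only means nothing a machine can measure is broken;
  // manual testing with assistive technology still has to follow.
  expect(results.violations).toEqual([]);
});
```

A gate like this keeps the machine-detectable 30% from regressing; everything described in the previous sections still needs hands-on testing with real assistive technology.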
Lighthouse scores and perfect automated testing results create false confidence. You’re measuring compliance with technical standards, not whether disabled people can actually use your website. The 70% of issues that automated tools miss are often the ones that matter most to actual users.
The solution is straightforward: use automated tools early and often, especially during your normal, ongoing functional testing. But also make manual testing with real assistive technology the core of your accessibility strategy. This is how you move from compliant to genuinely accessible.
Your site’s next 100 score should be celebrated only after you’ve tested with screen readers, keyboards, and voice control. That’s when you actually know whether you’ve built something accessible.
A perfect automated score doesn’t tell the whole story. At AFixt, we combine automated testing with comprehensive manual audits using real assistive technology. Our accessibility audits identify both the obvious problems and the 70% that tools miss—then we help you fix them in order of actual user impact. Whether you’re starting fresh or addressing gaps in your current accessibility, our team of accessibility specialists has the expertise to build an inclusive product.
Learn about our accessibility audit services to understand exactly what your site is missing. Let’s move beyond compliance scores to genuine accessibility.


