
125 points robin_reala | 2 comments
1. jillesvangurp No.46204423
I've looked at this space a few times. Most of the documentation devolves into a depth-first dive through a bunch of tags and attributes without laying out a coherent goal for what you should be doing.

My observations:

- most web developers have never experienced the screen reader version of their application, or of any application

- most teams don't have visually impaired screen reader users who could provide feedback

- so, no accessibility bugs ever get reported

That is, unless developers go out of their way to use proper tools and do proper testing for this. And testing practices for ARIA are probably at the same level as for other application features: sketchy to non-existent at best.
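For what it's worth, a baseline of automated checks is cheap to bolt on. Here's a minimal sketch using Playwright with @axe-core/playwright; the URL and tags are placeholders, and axe only catches the machine-detectable subset of issues, so treat a green run as a floor, not a pass:

    // Minimal a11y smoke test sketch with Playwright + @axe-core/playwright.
    // The URL is a placeholder; adjust tags to the WCAG level you target.
    import { test, expect } from '@playwright/test';
    import AxeBuilder from '@axe-core/playwright';

    test('no detectable WCAG A/AA violations', async ({ page }) => {
      await page.goto('https://example.com/'); // placeholder URL
      const results = await new AxeBuilder({ page })
        .withTags(['wcag2a', 'wcag2aa']) // restrict to WCAG 2.0 A/AA rules
        .analyze();
      // Fails the test if axe reports any violations, with details in the diff.
      expect(results.violations).toEqual([]);
    });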

Let's face it: for most developers, ARIA is pure box ticking. Some of it has to be there (regulations, and PMs insisting because of them), but it doesn't have to be good, since nobody really checks these things. Including the PM.

Without a feedback loop, it's not surprising that most web apps don't get this even close to right. IMHO, over time, agentic tools might actually be more helpful to blind people, as they can summarize, describe, and abstract what's on the screen. Agentic testing via a screen reader might also become a thing. I've done some testing with ChatGPT's agent mode and it was shockingly good at figuring out our UI. Not a bad process for automating what used to be manual QA. I've been meaning to put more time into this.
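On the agentic angle, the raw material is already there: you can dump the accessibility tree (the same role/name structure a screen reader navigates) and hand it to whatever agent you use for summarizing. A rough sketch with Playwright below; note that page.accessibility.snapshot() is deprecated in recent releases in favor of ARIA snapshots, but it still illustrates the idea, and the URL is a placeholder:

    // Rough sketch: extract the accessibility tree an agent (or a screen
    // reader) would see, using Playwright's deprecated-but-working API.
    import { chromium } from 'playwright';

    async function describePage(url: string): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto(url);

      // Role/name tree, roughly what assistive technology navigates.
      const tree = await page.accessibility.snapshot();
      console.log(JSON.stringify(tree, null, 2));

      await browser.close();
    }

    describePage('https://example.com/'); // placeholder URL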

I actually have a very low-priority goal of starting to drive some of this in our own application. Mostly that's a hunch that it might come up because of some government customers. But this is Germany, and they seem to have lots of blind spots on software quality. I don't actually expect any feedback whatsoever from customers or anyone else on this; I just want to pre-empt it.

replies(1): >>46205671 #
2. scroot No.46205671
For those of us in civic tech, this is a higher priority. At the federal level especially, there are laws on the books about website accessibility compliance -- even if, at present, the powers that be are ignoring them.

I have found the Web Content Accessibility Guidelines [1] to be especially useful when thinking through what I need to implement and why.

It is _impossible_ to thoroughly test your accessibility concerns with automated tooling alone. You will need an experienced screen reader user to go through your site, especially if it contains complex JS-enabled controls and other dynamic updates. This is because individual users' habits and the particular screen reader / browser combination can produce different results. It's important to know the common "patterns of use" for, say, a JAWS user vs a VoiceOver user.

The last thing I'll recommend: if you are testing a11y yourself on a Mac with VoiceOver, do all your testing in Safari. In our research, most VoiceOver users on macOS/iOS use Safari because it has the best screen reader integration on that platform, and other browsers miss basic things.

[1] https://www.w3.org/WAI/standards-guidelines/wcag/