
294 points | ulrischa | 1 comment
nozzlegear No.42174177
For anyone who didn't click through to the WebKit bug report the author submitted, a WebKit dev asked him to clarify why the BBC finds it beneficial to be able to detect that the event was sent from a keyboard. This is the author's response:

> Ironically, I want interoperability on this to help with use cases relating to accessibility.

> I work at the BBC and, on our UK website, our navigation bar menu button behaves slightly differently depending on if it is opened with a pointer or keyboard. The click event will always open the menu, but:

> - when opening with a pointer, the focus moves to the menu container.

> - when opening with a keyboard, there is no animation to open the menu and the focus moves to the first link in the menu.

> Often when opening a menu, we want slightly different behaviour around focus and animations depending on whether the user 'clicks' with a pointer or keyboard.

> The 'click' event is great when creating user experiences for keyboard users because it is device-independent. On keyboards, it is only invoked by Space or Enter key presses. If we were to use the keydown event, we would have to check whether the Space or Enter key was pressed.

Source: https://bugs.webkit.org/show_bug.cgi?id=281430
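
To make the pattern concrete, here is a minimal sketch of the behaviour described in the quote. It is not the BBC's actual code: the element IDs are made up, and `event.detail === 0` is just one common heuristic for "this click was keyboard-generated" — precisely the kind of signal the bug report asks browsers to make interoperable.

    // Assumed markup: <button id="menu-button"> and a #menu container
    // with tabindex="-1" so it can receive focus programmatically.
    const button = document.querySelector<HTMLButtonElement>('#menu-button')!;
    const menu = document.querySelector<HTMLElement>('#menu')!;

    button.addEventListener('click', (event: MouseEvent) => {
      // Keyboard-generated clicks report a click count of 0 in most engines.
      const fromKeyboard = event.detail === 0;
      menu.hidden = false;
      if (fromKeyboard) {
        menu.classList.add('no-animation');                  // skip the opening animation
        menu.querySelector<HTMLAnchorElement>('a')?.focus(); // focus the first link
      } else {
        menu.classList.remove('no-animation');
        menu.focus();                                        // focus the menu container
      }
    });

    // The keydown alternative the author wants to avoid: the Space/Enter
    // activation that 'click' provides for free must be rebuilt by hand.
    button.addEventListener('keydown', (event: KeyboardEvent) => {
      if (event.key === 'Enter' || event.key === ' ') {
        event.preventDefault(); // stop Space from scrolling the page
        // ...open the menu with the keyboard-specific behaviour...
      }
    });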

1. Sayrus No.42174432
While I can understand the author's need for screenX and screenY, the question remains: why would screenX return the real on-screen position instead of a position within the renderer (I don't think that exists?) or within the rendered page (layerX and layerY)? The author's need would be met just as well by a renderer-relative position, and window positions wouldn't be leaked to every visited website.
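
As a sketch of the leak being described: any page that receives a real pointer event can subtract the viewport-relative coordinates from the screen-relative ones to estimate where the browser window sits on the user's display. (This ignores page zoom and OS display scaling, which skew the numbers; the point is that a renderer- or page-relative coordinate would serve the author's use case without exposing this.)

    // Hypothetical illustration, not code from the thread: estimate the
    // viewport's on-screen origin from any genuine pointer click.
    document.addEventListener('click', (event: MouseEvent) => {
      const originX = event.screenX - event.clientX;
      const originY = event.screenY - event.clientY;
      console.log(`viewport origin on screen: ~(${originX}, ${originY})`);
    });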