updates @ m.blog

Spambots and Accessibility

A new Scientific American article talks about CAPTCHA (“completely automated public Turing test to tell computers and humans apart”) systems being used to prevent spambots from accessing certain web data. Currently, these systems typically work by presenting a graphic image of a distorted word on a patterned background and asking the user to type the word they see into a form field. While the human brain is quite adept at interpreting these images, they are extremely difficult for a computer algorithm to decipher. Recently I’ve seen these systems used quite often in message board registration and even on some weblogs (to prevent comment spam).
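
For the curious, here is roughly what one of these systems might look like under the hood. This is only a minimal sketch using the Python Imaging Library; the word list, image dimensions, and font path are placeholders of my own, not any particular site’s implementation:

    # Minimal sketch of a distorted-word CAPTCHA generator (PIL assumed).
    import random
    from PIL import Image, ImageDraw, ImageFont, ImageFilter

    def make_captcha(word, size=(240, 70)):
        img = Image.new("RGB", size, "white")
        draw = ImageDraw.Draw(img)
        # Patterned background: random light lines that confuse OCR
        # but that the human eye filters out easily.
        for _ in range(40):
            x1, y1 = random.randint(0, size[0]), random.randint(0, size[1])
            x2, y2 = random.randint(0, size[0]), random.randint(0, size[1])
            draw.line((x1, y1, x2, y2), fill="lightgray")
        # Draw each letter at a random vertical offset so the word
        # doesn't sit on a clean baseline a machine could segment.
        font = ImageFont.truetype("arial.ttf", 36)  # any TrueType font will do
        x = 10
        for ch in word:
            y = random.randint(5, 20)
            draw.text((x, y), ch, font=font, fill="black")
            x += 28
        # A mild blur makes per-pixel template matching harder still.
        return img.filter(ImageFilter.SMOOTH)

    challenge = random.choice(["ostrich", "lantern", "vortex"])  # placeholder words
    make_captcha(challenge).save("captcha.png")
    # The server stores `challenge` in the session and compares it
    # against whatever the user types into the form field.

Real systems add heavier warping and distortion, but the idea is the same: the answer lives only in the pixels, where (for now) only a human can read it.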

Unfortunately, the article focuses strictly on how well these systems work at preventing robots from accessing a page, while ignoring the salient issue of how often they also prevent some humans from accessing it. The problem is that these systems bar access to those who are visually impaired; after all, embedding an alt property in the image would defeat the intended purpose, since a spambot could read the answer straight out of the markup. Why is this a bad idea? 4% of the populace in the US is visually impaired, 10% of men are color-blind… what percentage of the visitors to your website do you think are malicious bots?
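
To see why the alt attribute is a non-starter, picture a spambot pointed at a signup page whose CAPTCHA image is dutifully labeled for screen readers. The URL and markup below are hypothetical, but the scraping is trivial:

    # Why alt text defeats the purpose: the bot never looks at the image.
    import re
    import urllib.request

    html = urllib.request.urlopen("http://example.com/signup").read().decode()
    # Pull the answer out of a hypothetical <img class="captcha" alt="..."> tag.
    match = re.search(r'<img[^>]*class="captcha"[^>]*alt="([^"]+)"', html)
    if match:
        print("Bot 'solved' the CAPTCHA:", match.group(1))

So the accessibility hook and the security goal are directly at odds, which is exactly the tension the article skips over.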

Instead of focusing on making CAPTCHA systems that can reliably block 99.9% of robots, the more pressing goal should be making CAPTCHA systems that reliably deny access to 0.0% of humans.

(There is also the basic issue of how often, and on how many separate sites, you can expect users to respond to a challenge-response mechanism without simply getting fed up. These systems are still fairly new, but I’m already beginning to get annoyed at the interruption to my normal flow of web browsing.)
