Artificial Intelligence Will Defeat CAPTCHA — How Will We Prove We’re Human Then?
If you use the web for more than just browsing (that's pretty much everyone), chances are you've had your fair share of "CAPTCHA rage," the frustration stemming from trying to discern a marginally legible string of letters aimed at verifying that you are a human. CAPTCHA, which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart," was introduced to the Internet a decade ago and has seen widespread adoption in various forms -- whether using letters, sounds, math equations, or images -- even as complaints about their use continue.
A large-scale Stanford study a few years ago concluded that "CAPTCHAs are often difficult for humans." It has also been reported that around 1 in 5 visitors will leave a website rather than complete a CAPTCHA.
A longstanding belief is that the inconvenience of CAPTCHAs is the price we all pay for keeping websites secure. But there's no escaping that CAPTCHAs are becoming harder for humans and easier for artificial intelligence programs to solve.
For example, an app developer named Andrew Munsell recently published a post about his own frustration with reCAPTCHA, Google's version of the CAPTCHA system, after a few failed login attempts on an account. In the post, he includes a sample of the reCAPTCHAs he was presented, and many commenters chimed in with their own CAPTCHA horror stories. It's understandable why this is such a common experience for web users when 280 million CAPTCHAs are solved daily, according to Businessweek.
In theory, CAPTCHAs act as effective gatekeepers for a system because the test satisfies two conditions:
(1) a human will pass by correctly identifying symbols or by performing a series of operations
(2) a program will fail because of its inability to recognize symbols or carry out a set of operations
Ensuring that bots are blocked means presenting a test that the software cannot complete. CAPTCHAs are commonly used for user registration, contact forms, or failed logins. These are all points of entry for spamming or phishing a userbase, and bots are created to crawl websites looking for exploits. Sites that use CAPTCHAs typically tolerate a certain failure rate for humans, but may lock out access from an IP if the failed attempts exceed a threshold.
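The lockout logic described above can be sketched in a few lines. This is a minimal illustration, not any site's actual implementation; the threshold value and the decision strings are invented for the example.

```python
from collections import defaultdict

# Hypothetical threshold -- real sites tune this to their own traffic.
MAX_FAILED_ATTEMPTS = 5

failed_attempts = defaultdict(int)  # IP address -> consecutive CAPTCHA failures

def record_attempt(ip: str, solved: bool) -> str:
    """Return the gatekeeper's decision after a CAPTCHA attempt from `ip`."""
    if solved:
        failed_attempts[ip] = 0       # humans fail sometimes, so reset on success
        return "allow"
    failed_attempts[ip] += 1
    if failed_attempts[ip] >= MAX_FAILED_ATTEMPTS:
        return "lock out"             # too many failures in a row: likely a bot
    return "retry"
```

The key design point is tolerance: a handful of failures only triggers a retry, because humans regularly get CAPTCHAs wrong, while a sustained run of failures from one address crosses the threshold and gets blocked.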
However, the assumption is that though humans may fail to solve some of the tests, bots must always fail; otherwise, the system isn't secure because it can't weed them out. To be effective, CAPTCHAs must present tests that are beyond the capabilities of a bot's optical character recognition (OCR) or require operations that software cannot perform.
Therein lies the problem.
First, a boatload of strategies has been posted around the web describing how to improve character recognition in CAPTCHA-breaking scripts. These may involve ways to enhance OCR by removing noise, such as those annoying intentionally introduced lines or dots peppered throughout images. Another strategy is to manipulate the characters in an image by rotating, aligning, or warping them -- basically, many of the features that come standard in today's photo editors. Libraries of solved CAPTCHA images have also been collected, thanks to sites around the web that pay people fractions of pennies to solve tons of CAPTCHAs. Amazon Mechanical Turk used to be a popular one, but now a number of independent sites are around, such as Death by CAPTCHA. Clever hacks have even been developed for audio CAPTCHAs that merely deconstruct waveform shapes to identify what numbers are being spoken.
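To make the noise-removal step concrete, here is a toy sketch of the kind of filter a CAPTCHA-breaking script might run before handing an image to OCR: a 3x3 median filter that erases isolated noise dots while leaving thicker character strokes intact. The image here is an invented 5x5 binary grid standing in for a binarized CAPTCHA; real scripts work on full scans, but the principle is the same.

```python
def median_denoise(img):
    """Remove isolated noise dots from a binary image (list of 0/1 rows)
    by replacing each pixel with the median of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # gather the 3x3 neighborhood, clamped at the image borders
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            vals.sort()
            out[y][x] = vals[len(vals) // 2]  # median of the neighborhood
    return out

# Toy image: a two-pixel-wide vertical stroke plus two single-pixel noise dots.
noisy = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],  # noise dot at the right edge
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],  # noise dot in the corner
]
clean = median_denoise(noisy)
```

After filtering, the isolated dots vanish (most of their neighbors are background) while the stroke survives (most of its neighbors are ink) -- which is exactly why thin decoy lines and dots are a weak defense against even simple preprocessing.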
These techniques have been posted not just by professional hackers, but by anyone who figures out a way past CAPTCHA and decides to share it. This isn't always out of malicious motives, either. Many of these exploits are posted by users who are genuinely concerned about security and demonstrate the exploit to help the company fix it.
Second, newer CAPTCHA systems have been developed that take a different approach to the Turing test by asking users to perform operations through input devices, like a keyboard or mouse. Some are simple in their approach, such as the MotionCAPTCHA project that requires tracing a pattern with the mouse pointer, or Capy, a service in beta for touch-based devices. This kind of test may even find its way into an augmented-reality device like Google Glass by tracking eye movements as an image moves across the field of view, as a recently uncovered patent suggests.
A next-generation CAPTCHA called PlayThru has also been developed by areyouahuman.com, which presents a mini game requiring image recognition, some reasoning, and mouse operations to complete. The company claims that a PlayThru presents users with five levels of interaction, a significant step up from the OCR of CAPTCHAs. Reportedly, users spend an average of only 10-12 seconds solving a PlayThru compared to 16 seconds for a CAPTCHA. More secure, faster, and with an element of fun -- definitely an improvement -- but attempts at hacking it have already been posted on YouTube and discussed at Hack A Day.
Approaches that rely on operations performed by the user have a fatal flaw beyond the need to recognize what actions are required: they depend on computer input devices, which can be driven by software in the same way a remote desktop application drives them. To defeat these Turing tests, someone just deconstructs the required steps and programs a bot to recognize what it needs to do and synthesize whatever inputs are required.
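A toy sketch shows why gesture-based tests are scriptable: once the required trace is known, a bot can synthesize the entire stream of mouse-move events and replay it through an input API. The waypoints and the (x, y) event format here are invented for illustration.

```python
def synthesize_mouse_path(waypoints, steps_per_segment=10):
    """Linearly interpolate between gesture waypoints to produce the
    stream of (x, y) move events a bot would replay to an input API."""
    events = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            events.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    events.append(waypoints[-1])  # finish exactly on the final waypoint
    return events

# Invented waypoints for a trace a test like MotionCAPTCHA might require.
gesture = [(0, 0), (40, 20), (10, 50), (50, 80)]
path = synthesize_mouse_path(gesture)
```

A real bot would feed each point to an OS-level input function on a timer; a slightly cleverer one would add random jitter and variable speed to the interpolation so the trace looks hand-drawn rather than mathematically straight.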
Fundamentally, hackers are teaching programs how to think like humans. Anyone creating a CAPTCHA system is playing a game of staying ahead of the curve, developing methods that bots cannot solve until someone teaches them to. Theoretically, this could go on indefinitely if it weren't for the fact that the tests are no longer simple but have become genuinely challenging for humans. When the failure rate of humans and the success rate of bots converge, CAPTCHAs will become meaningless. In other words, the "Completely Automated Public Turing test" will no longer be able to tell computers and humans apart. We're likely on the cusp of that point.
In one sense then, the collective efforts of hackers combined with companies generating even more sophisticated Turing tests to beat are actually helping to evolve artificial intelligence. So every step backward for CAPTCHA is a step forward for AI.
It isn't hard then to see where this is going. At some point in the near future, it will be very difficult to prove to a computer that we are human and not a bot. And if you think that the standard logins we've grown accustomed to will still be useful in years to come, think again: recent reports about stolen user information from big companies like LinkedIn and Blizzard don't bode well for the tried-and-true username/password system. Perhaps eye scanners and blood samples are inevitable, but those are exploitable too (watch Gattaca to see a masterful exploit at work). The truth is, artificial intelligence will find more and more ways to make computers look human until telling the difference between them becomes a painstaking process, something akin to the following classic scene from Blade Runner:
Lest you think that because CAPTCHAs will fail they're just a big waste of time, take heart. Google has actually been using reCAPTCHA with an ulterior motive: crowdsourcing the digitization of old newspapers and books. The sketchy reCAPTCHAs you often see are merely poor scans that Google presents so that people can collectively transcribe the words. Check out the series of blog posts at Techie Buzz to learn more about how reCAPTCHA works. It may help the next time you want to throw your computer out the window because you can't read the scribbles.