Have you ever walked into a room and forgotten why you were there? Or, in the middle of a conversation, forgotten a person’s name? Or stumbled while briefing your boss on a project because a crucial fact escaped your mind?
Yeah, me too.
“Tip of the tongue” syndrome haunts us all — that feeling where you’re close to remembering something, but just can’t seem to get there. But what if, at that exact moment, an AI-powered “cognitive assistant” could pitch in and deliver that missing piece of information straight into your ear?
That future may soon be here. In a patent published late last year, IBM described a sort of “automatic Google for the mind”: one that monitors your conversations and actions, understands your intentions and offers help only when you need it most.
The brainchild of computational neuroscientist Dr. James Kozloski, a master inventor at IBM Research, the cognitive digital assistant has lofty goals: by acting as an external memory search module, it aims to help people with memory impairments regain the ability to navigate daily life with minimal assistance.
For the rest of us? A searchable memory could give us the opportunity to make innovative connections, support brainstorming sessions and help us tackle more problems and think more deeply.
In a recent interview with The Atlantic, Kozloski laid out his plans for a human-AI mind-meld future.
Context Is Key
To understand how an AI cognitive assistant works, we first need to look at why human memory fails.
One reason is context. We excel at memorizing stories — the whats, whos, whens and wheres. When we remember an event, we fit its different components together like a puzzle; because of its linked nature, any component can act as a trigger, fishing out the entire memory from the depths of our minds.
Yet often we have trouble finding the trigger: the memory is there, but we can’t access it. Some current apps — to-do lists, scheduling apps, contact lists — already help us remember by acting as a trigger. But they can’t help someone who needs a reminder to update and use those apps in the first place.
IBM’s cognitive assistant hopes to bridge this gap.
Acting as a model of the user’s memory and behavior, it surveys our conversations, monitors our actions and — using Bayesian inference, a probabilistic reasoning method widely used in machine learning — predicts what we want, detects when we need help and offers support.
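For the technically curious, the Bayesian piece is easier to picture in code. The toy sketch below, with invented contexts, cues and probabilities (none of it drawn from IBM’s patent), shows how such an assistant could keep updating its belief about what you’re doing as new cues arrive:

```python
# A toy sketch of a Bayesian belief update, with invented contexts, cues and
# probabilities (not IBM's actual model).

# Prior belief about what the user is currently doing.
priors = {"business_call": 0.3, "family_chat": 0.5, "solo_task": 0.2}

# P(cue | context): how likely each observed cue is under each context.
likelihoods = {
    "business_call": {"formal_vocab": 0.7, "long_pause": 0.2},
    "family_chat":   {"formal_vocab": 0.1, "long_pause": 0.3},
    "solo_task":     {"formal_vocab": 0.2, "long_pause": 0.6},
}

def update_beliefs(beliefs, cue):
    """One Bayes step: weight each belief by the cue's likelihood, then normalize."""
    weighted = {ctx: p * likelihoods[ctx].get(cue, 0.05) for ctx, p in beliefs.items()}
    total = sum(weighted.values())
    return {ctx: w / total for ctx, w in weighted.items()}

beliefs = priors
for cue in ["formal_vocab", "long_pause"]:  # cues extracted from speech, wearables, etc.
    beliefs = update_beliefs(beliefs, cue)

print(beliefs)  # the assistant acts on whichever context is now most probable
```

The point of the example is simply that each new observation shifts the odds, which is how the system can settle on a guess without ever being told outright what you’re up to.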
If you’re thinking “whoa, that’s creepy,” you’re not alone.
But according to Kozloski, we are already constantly monitored by our electronic devices. A Fitbit tracks your heart rate and movement, a sweat analyzer checks for dehydration and fatigue, augmented reality devices listen in on your conversations to offer real-time translations and suggest potential replies.
And the future of trackers is only getting more sophisticated and personal.
This data, combined with data from your environment, is then fed into the cognitive assistant. With enough data, the AI can compute a model of what a person is thinking or doing.
By analyzing word sequences and speech patterns, for example, it may detect whether you’re talking in a business setting or with a family member. It could similarly monitor the words of your conversation partner and, using Bayesian inference, make an educated guess about who he or she is.
If you suddenly experience a word block, the AI would make a note of where the conversation lapsed. Then, using data from your previous speech recordings and the Internet, it could offer up words that you most likely had in mind for that particular context.
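As a rough illustration rather than IBM’s actual method, candidate words could be ranked by how often they accompanied the words spoken just before the lapse in past conversations from the same context. The archive, contexts and scoring below are all invented for the example:

```python
# A toy ranking of candidate words for a detected lapse; the archive, contexts
# and scoring are invented for illustration and are not IBM's method.

from collections import Counter

# Hypothetical archive of the user's past utterances, tagged by context.
past_utterances = [
    ("work", "the quarterly budget review is scheduled for friday"),
    ("work", "we need the budget numbers before the review"),
    ("family", "the tattoo appointment is booked for next week"),
]

def suggest_words(recent_words, context, top_n=3):
    """Score words that historically accompanied the recent words in this context."""
    scores = Counter()
    for ctx, sentence in past_utterances:
        if ctx != context:
            continue
        tokens = sentence.split()
        if any(word in tokens for word in recent_words):
            scores.update(t for t in tokens if t not in recent_words)
    return [word for word, _ in scores.most_common(top_n)]

# The speaker trails off after "we still need the ..." during a work call.
print(suggest_words(recent_words={"need", "the"}, context="work"))
```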
The system would work even better if your partner also wears a cognitive assistant device, Kozloski suggests. In that case, the two devices could share data to build a better model of what information you’re trying to access at that very moment.
If all this sounds abstract, here’s an example.
Imagine you’re calling a friend you haven’t talked to recently. From the dial tones or your wrist movement, the cognitive assistant tracks the number you dialed. From there, it figures out who you’re calling and cross-checks its database for previous conversations, calendars and photos related to that person.
It then gently reminds you — through an earpiece, speakers or email — that last time you talked, your friend had just begun a new job. By scanning your texts, it notes that several weeks ago she had booked a tattoo appointment — her first! — that was now coming up.
All of this information sits primed and ready — all before your friend picks up — just in case you want a friendly reminder.
How — and if — you want the data delivered is up to you, stresses Kozloski. That’s the thing: the cognitive assistant would only pitch in when you want it to.
“It would be very annoying if it were continually interrupting you,” he said.
The assistant could come with a preset threshold for jumping in. For example, it could detect pauses in your speech or actions, and through machine learning, understand the “tells” of when you’re confused. This data helps the assistant automatically adjust its threshold.
Direct human feedback would also contribute to the assistant’s accuracy, allowing a truly personalized experience.
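One way to picture that adjustment, purely as an assumed sketch rather than the patent’s algorithm: treat the threshold as a single number that user feedback nudges up or down.

```python
# An assumed sketch, not the patent's algorithm: the assistant speaks up only when
# its estimate that the user is stuck clears a threshold, and feedback moves that bar.

class InterruptionPolicy:
    def __init__(self, threshold=0.8, step=0.05):
        self.threshold = threshold  # how confident "the user is stuck" must be
        self.step = step            # how far each piece of feedback moves the bar

    def should_interrupt(self, stuck_probability):
        return stuck_probability > self.threshold

    def record_feedback(self, was_helpful):
        # Unwanted interruptions raise the bar; welcome ones lower it slightly.
        if was_helpful:
            self.threshold = max(0.5, self.threshold - self.step)
        else:
            self.threshold = min(0.99, self.threshold + self.step)

policy = InterruptionPolicy()
if policy.should_interrupt(stuck_probability=0.85):  # e.g. a long pause mid-sentence
    print("offer the missing word")
policy.record_feedback(was_helpful=False)  # the user waves the suggestion off
print(policy.threshold)                    # next time, the assistant waits longer
```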
By catering to the individual’s cadence and idiosyncrasies, it could build a better model of what’s normal for the user, and what’s not, Kozloski said.
Personal Care
An obvious application for the assistant would be for people suffering from memory loss.
“In early stages of Alzheimer’s, a person can often perform everyday functions involving memory,” wrote Kozloski in the patent.
As memory loss becomes more severe, the person begins to experience the devastating results of cognitive breakdown, he explains. They might not take their medication on time. They might miss important appointments. They may even lose the ability to interact with other people, dress themselves or cook meals.
In these cases, a cognitive assistant would not only help the user by giving them friendly reminders, it could also monitor the person’s cognitive decline over time.
For example, are they forgetting something more frequently? Is it a memory or a motor task? Is the user straying from his or her usual routine?
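As a hypothetical sketch of that kind of monitoring, the assistant could log each detected lapse with a date and a task type, then count them by month; the log and categories below are invented for illustration, not drawn from the patent:

```python
# A hypothetical sketch of trend monitoring; the lapse log and categories are
# invented for illustration, not drawn from the patent.

from datetime import date

# Each detected lapse is logged with a date and the kind of task affected.
lapses = [
    (date(2016, 1, 5),  "memory"),
    (date(2016, 1, 20), "memory"),
    (date(2016, 2, 3),  "motor"),
    (date(2016, 2, 10), "memory"),
    (date(2016, 2, 24), "memory"),
    (date(2016, 2, 28), "memory"),
]

def lapses_per_month(log, kind):
    """Count lapses of a given kind, grouped by calendar month."""
    counts = {}
    for day, k in log:
        if k == kind:
            key = (day.year, day.month)
            counts[key] = counts.get(key, 0) + 1
    return counts

print(lapses_per_month(lapses, "memory"))
# {(2016, 1): 2, (2016, 2): 3} -- a rising count could be flagged to a caregiver
```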
The assistant could “perhaps prevent side effects of what are otherwise sort of innocuous episodes of forgetting,” said Kozloski.
Kozloski is careful to address privacy and security issues that could arise from uploading your digital self to the assistant.
“…The invention includes security mechanisms to preserve the privacy of the user or patient. For example, the system can be configured to only share data with certain individuals, or to only access an electronic living will of the patient in order to determine who should have access if the user is no longer capable of communicating this information,” he writes.
The system may adopt other security measures, but for now Kozloski is focusing on the device itself.
Even if Kozloski’s idea fails, it’s easy to imagine that something similar may take its place. IBM’s cognitive assistant, combined with augmented reality, virtual reality and brain-machine interfaces, suggests that we are on the fast track toward a new way of life. It’s a human+machines future.
Image Credit: Shutterstock.com