Inside OpenAI: Will Transparency Protect Us From Artificial Intelligence Run Amok?

Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit research company set to change the world of machine learning.

Backed by Tesla and SpaceX's Elon Musk and Y Combinator's Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in pledged funding, OpenAI eschews the need for financial returns, allowing it to stake out sky-high moral ground.

By not having to answer to industry or academia, OpenAI hopes not just to develop digital intelligence, but also to guide research along an ethical route that, according to its inaugural blog post, "benefits humanity as a whole."

OpenAI began with the big picture in mind: in 100 years, what will AI be able to achieve, and should we be worried? If left in the hands of giant, for-profit tech companies such as Google, Facebook and Apple, all of whom have readily invested in developing their own AI systems in the last few years, could AI — and future superintelligent systems — hit a breaking point and spiral out of control? Could AI be commandeered by governments to monitor and control their citizens? Could it, as Elon Musk warned earlier this year, ultimately destroy humankind?

Since its initial conception earlier this year, OpenAI has hand-picked the cream of the crop in the field of deep learning to assemble its team. Among its top young talent is Andrej Karpathy, a PhD candidate at Stanford whose resume includes internships at Google and DeepMind, the secretive London-based AI company that Google bought in 2014.

Last Tuesday, I sat down with Andrej to chat about OpenAI’s ethos and vision, its initial steps and focus, as well as the future of AI and superintelligence. The interview has been condensed and edited for clarity.


How did OpenAI come about?

Earlier this year, Greg [Brockman], who used to be the CTO of Stripe, left the company looking to do something a bit different. He has a long-standing interest in AI, so he was asking around, toying with the idea of a research-focused AI startup. He reached out to the field, got the names of people doing good work, and ended up rounding us up.

At the same time, Sam [Altman] from YC became extremely interested in this as well. One way that YC is encouraging innovation is as a startup accelerator; another is through research labs. So, Sam recently opened YC Research, which is an umbrella research organization, and OpenAI is, or will become, one of the labs.

As for Elon — obviously he has had concerns over AI for a while, and after many conversations, he jumped on board OpenAI in hopes of helping AI develop in a beneficial and safe way.

How much influence will the funders have on how OpenAI does its research?

We’re still at very early stages so I’m not sure how this will work out. Elon said he’d like to work with us roughly once a week. My impression is that he doesn’t intend to come in and tell us what to do — our first interactions were more along the lines of “let me know in what way I can be helpful.” I felt a similar attitude from Sam and others.

AI has been making leaps recently, with contributions from academia, big tech companies and clever startups. What can OpenAI hope to achieve by putting you guys together in the same room that you can’t do now as a distributed network?

I’m a huge believer in putting people physically together in the same spot and having them talk. The concept of a network of people collaborating across institutions would be much less efficient, especially if they all have slightly different incentives and goals.

More abstractly, in terms of advancing AI as a technology, what can OpenAI do that current research institutions, companies or deep learning as a field can’t?

A lot of it comes from OpenAI as a non-profit. What's happening now in AI is that you have a very limited number of research labs and large companies, such as Google, which are hiring a lot of researchers doing groundbreaking work. Now suppose AI could one day become — for lack of a better word — dangerous, or used dangerously by people. It's not clear that you would want a big for-profit company to have a huge lead, or even a monopoly over the research. It is primarily an issue of incentives, and the fact that they are not necessarily aligned with what is good for humanity. We are baking that into our DNA from the start.

Also, there are some benefits of being a non-profit that I didn't really appreciate until now. People are actually reaching out and saying "we want to help"; you don't get this in companies; it's unthinkable. We're getting emails from dozens of places — people offering to help, offering their services, offering to collaborate, offering GPU power. People are very willing to engage with you, and in the end, it will propel our research forward, as well as AI as a field.

OpenAI seems to be built on the big picture: how AI will benefit humanity, and how it may eventually destroy us all. Elon has repeatedly warned against unmonitored AI development. In your opinion, is AI a threat?

When Elon talks about the future, he talks about scales of tens or hundreds of years from now, not the 5 or 10 years that most people think about. I don't see AI as a threat over the next 5 or 10 years, other than the ones you might expect from more reliance on automation; but if we're looking at humanity already populating Mars (that far in the future), then I have much more uncertainty, and sure, AI might develop in ways that could pose serious challenges.

One thing we do see is that a lot of progress is happening very fast. For example, computer vision has undergone a complete transformation — papers from more than three years ago now look foreign in the face of recent approaches. So when we zoom out further over decades, I think I have a fairly wide distribution over where we could be. So say there is a 1% chance of something crazy and groundbreaking happening. When you additionally multiply that by the utility of a few for-profit companies having a monopoly on this tech, then yes, that starts to sound scary.

Do you think we should put restraints on AI research to ensure safety?

No, not top-down, at least not right now. In general I think it's a safer route to have more AI experts who have a shared awareness of the work in the field. Opening up research, as OpenAI wants to do, rather than having commercial entities hold a monopoly on results for intellectual property purposes, is perhaps a good way to go.

True, but recently for-profit companies have been releasing their technology as well; I'm thinking of Google's TensorFlow and Facebook's Torch. In this sense, how does OpenAI differ in its "open research" approach?

So when you say "releasing," there are a few things that need clarification. First, Facebook did not release Torch; Torch is a library that's been around for several years now. Facebook has committed to Torch and is improving on it. So has DeepMind.

But TensorFlow and Torch are just tiny specks of their research — they are tools that can help others do research well, but they're not actual results that others can build upon.

Still, it is true that many of these industrial labs have recently established a good track record of publishing research results, partly because a large number of people on the inside come from academia. That said, there is a veil of secrecy surrounding a large portion of the work, and not everything makes it out. In the end, companies don't really have very strong incentives to share.

OpenAI, on the other hand, encourages us to publish, to engage the public and academia, to tweet, to blog. I've gotten into trouble in the past for sharing a bit too much from inside companies, so I personally really, really enjoy the freedom.

What if OpenAI comes up with a potentially game-changing algorithm that could lead to superintelligence? Wouldn’t a fully open ecosystem increase the risk of abusing the technology?

In a sense it's kind of like CRISPR. CRISPR is a huge leap for genome editing that's been around for only a few years, but it has great potential for benefiting — and hurting — humankind. Because of these ethical issues, there was a recent conference on it in DC to discuss how we should go forward with it as a society.

If something like that happens in AI during the course of OpenAI’s research — well, we’d have to talk about it. We are not obligated to share everything — in that sense the name of the company is a misnomer — but the spirit of the company is that we do by default.

In the end, if there is a small chance of something crazy happening in AI research, everything else being equal, do you want these advances to be made inside a commercial company, especially one that has a monopoly on the research, or do you want this to happen within a non-profit?

We have this philosophy embedded in our DNA from the start: that we are mindful of how AI develops, rather than just [focused on] maximizing profit.

In that case, is OpenAI comfortable being the gatekeeper, so to speak? You're heavily influencing how and where the field is going.

It’s a lot of responsibility. It’s a “lesser evil” argument; I think it’s still bad. But we’re not the only ones “controlling” the field — because of our open nature we welcome and encourage others to join in on the discussion. Also, what’s the alternative? In a way a non-profit, with sharing and safety in its DNA, is the best option for the field and the utility of the field.

Also, AI is not the only field to worry about — I think bio is a far more pressing domain in terms of destroying the world [laugh]!

In terms of hiring — OpenAI is competing against giant tech companies in Silicon Valley. How is the company planning to attract top AI researchers?

We have perks [laugh].

But in all seriousness, I think the company's mission and team members are enough. We're currently hiring actively, and so far we've had no trouble getting people excited about joining us. In several ways OpenAI combines the best of academia and the startup world, and being a non-profit, we have the moral high ground, which is nice [laugh].

The team, especially, is super strong and super tight, and that is a large part of the draw.

Take some rising superstars in the field — myself not included — put them together and you get OpenAI. I joined mainly because I heard about who else is on the team. In a way, that’s the most shocking part; a friend of mine described it as “storming the temple.” Greg came in from nowhere and scooped up the top people to do something great and make something new.

Now that OpenAI has a rockstar team of scientists, what's your strategy for developing AI? Are you getting vast amounts of data from Elon? What problems are you tackling first?

So we're really still trying to figure a lot of this out. We are trying to approach this with a combination of bottom-up and top-down thinking. Bottom-up are the various papers and ideas we might want to work on. Top-down is doing so in a way that adds up. We're currently in the process of thinking this through.

For example, I just submitted one vision research proposal draft today, actually [laugh]. We’re putting a few of them together. Also it’s worth pointing out that we’re not currently actively working on AI safety. A lot of the research we currently have in mind looks conventional. In terms of general vision and philosophy I think we’re most similar to DeepMind.

We might at some point be able to take advantage of data from Elon or YC companies, but for now we also think we can go quite far making our own datasets, or working with existing public datasets in sync with the rest of academia.

Would OpenAI ever consider going into hardware, since sensors are a main way of interacting with the environment?

So, yes, we are interested, but hardware has a lot of issues. For us, roughly speaking, there are two worlds: the world of bits and the world of atoms. I am personally inclined to stay in the world of bits for now, in other words, software. You can run things in the cloud, and it's much faster. The world of atoms — such as robots — breaks too often and usually has a much slower iteration cycle. This is a very active discussion that we're having in the company right now.

Do you think we can actually get to generalized AI?

I think to get to superintelligence we might currently be missing a difference of "kind," in the sense that we won't get there just by making our current systems better. But fundamentally there's nothing preventing us from getting to human-like intelligence and beyond.

To me, it’s mostly a question of “when,” rather than “if.”

I don't think we need to simulate the human brain to get to human-like intelligence; we can zoom out and approximate how it works. I think there's a more straightforward path. For example, some recent work shows that ConvNet* activations are very similar to activations in the IT area of the human visual cortex, without mimicking how neurons actually work.

[*SF: ConvNet, or convolutional network, is a type of artificial neural network tailored to visual tasks, first developed by Yann LeCun in the 1990s. IT is the inferior temporal cortex, which processes complex object features.]
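
[SF: For readers curious what such an architecture looks like in code, below is a minimal, purely illustrative sketch of a tiny ConvNet written with the PyTorch library. It is not OpenAI's or Andrej's code, and the layer sizes are arbitrary choices; it simply shows the characteristic stack of convolution, nonlinearity, and pooling layers followed by a classifier.]

```python
# Illustrative only: a tiny convolutional network ("ConvNet") for
# classifying 32x32 RGB images, sketched with PyTorch. The specific
# layer sizes are arbitrary and not drawn from OpenAI's work.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                   # stacked conv/pool layers extract features
        return self.classifier(x.flatten(1))   # linear layer maps features to class scores

# Example: run a batch of four random 32x32 RGB images through the network.
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```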

So it seems to me that with ConvNets we've almost checked off large parts of the visual cortex, which is somewhere around 30% of the cortex, and the rest of the cortex maybe doesn't look all that different. So I don't see why, over a timescale of several decades, we can't make good progress on checking off the rest.

Another point is that we don't necessarily have to be worried about human-level AI. I consider chimp-level AI to be equally scary, because going from chimps to humans took nature only a blink of an eye on evolutionary time scales, and I suspect that might be the case in our own work as well. Similarly, my feeling is that once we get to that level it will be easy to overshoot and get to superintelligence.

On a positive note though, what gives me solace is that when you look at our field historically, the image of AI research progressing with a series of unexpected “eureka” breakthroughs is wrong. There is no historical precedent for such moments; instead we’re seeing a lot of fast and accelerating, but still incremental progress. So let’s put this wonderful technology to good use in our society while also keeping a watchful eye on how it all develops.

Image Credit: Shutterstock.com

Shelly Fan
https://neurofantastic.com/
Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.