While this may seem far-fetched to many, a number of experts in the field are concerned about the potential impact of artificial intelligence. An excellent treatise on the subject can be found at Wait But Why; it explains that while it may take a long time for machines to reach human-level intelligence, the exponential growth curve of AI has quite shocking implications. AI doesn't just stop at human-level intelligence; that is merely a starting point toward levels of intelligence we can hardly imagine.
Musk has long warned about the dangers of AI, so it's no surprise he's co-chairing this project with Sam Altman, the CEO of Y Combinator. OpenAI will be a research lab meant to prevent governments and large organizations from gaining too much power by owning super-intelligent systems.
Elon Musk: As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to ensure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501(c)(3), a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.
And then philosophically there’s an important element here: we want AI to be widespread. There are two schools of thought: do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good.

More about OpenAI can be read at Medium.