What are neural networks and why are they so important?
Posted on Tuesday, Dec 03 2019 @ 13:17 CET by Thomas De Maesschalck

Ars Technica takes a look at neural networks. Once a scientific curiosity in the 1950s, these networks now underpin a massive industry, as much of the technology we interact with daily has adopted some form of artificial intelligence. One such technique is deep learning, which uses neural networks loosely inspired by networks of biological neurons. Since 2012, neural networks have seen huge adoption, as advances in computing power finally made it possible to use them on a broad scale.

Computer scientists have been experimenting with neural networks since the 1950s, but two big breakthroughs, one in 1986 and the other in 2012, laid the foundation for today's vast deep learning industry. The 2012 breakthrough, the deep learning revolution, was the discovery that neural networks perform dramatically better with many layers rather than just a few. That discovery was made possible by the growing amount of both data and computing power that had become available by 2012.

This feature offers a primer on neural networks. We'll explain what neural networks are, how they work, and where they came from. And we'll explore why, despite many decades of previous research, neural networks have only really come into their own since 2012.

You can read the post over here.