Natural selection of artificial brains is why great AI will predate decent neuroscience

Published on August 24, 2016

There is a common complaint in the machine learning community today that nobody understands how the best algorithms work. Deep learning, the process of training enormous neural networks to recognize layered abstractions, has dominated all sorts of machine learning tasks, from character recognition to game playing and image captioning. While the original idea of neural networks was inspired decades ago by observed phenomena in brains, neural networks have diverged wildly from the structure of biological brains. Researchers searching for the best results on their machine learning tasks have dreamed up not only complex new architectures, but also all kinds of tips, tricks, and techniques to make their networks more effective.

The trouble is that many of these methods are found empirically, rather than theoretically. Whereas traditional machine learning algorithms are grounded in rigorous mathematical proofs, many popular techniques for achieving top-tier results with deep neural networks have their theories and explanations supplied after the fact, attempting to justify why the thing that was tried worked so well.
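Dropout is a well-known instance of this pattern (it is not mentioned above; I'm using it as an illustrative example): it was introduced as a simple trick that happened to reduce overfitting, and theoretical accounts of why it works arrived later. A minimal sketch of "inverted" dropout in NumPy:

```python
import numpy as np

def dropout(activations, rate=0.5, rng=None, train=True):
    """Inverted dropout: randomly zero a fraction `rate` of activations
    during training, scaling the survivors by 1/(1 - rate) so the
    expected value of each unit is unchanged. At eval time (train=False),
    the input passes through untouched."""
    if not train or rate == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= rate  # keep with prob 1 - rate
    return activations * mask / (1.0 - rate)
```

The scaling-by-`1/(1 - rate)` detail is itself the kind of engineering fix the paragraph describes: it exists so that no code change is needed between training and inference, not because a theory demanded it.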

While this seems remarkable and frustrating to those who develop algorithms, it makes intuitive sense when you take a step back and consider that the same process of trial and error is exactly what led to the development of animal brains. Rather than the application of an elegant and well-reasoned mathematical theory, brains are the product of iterative improvements on whatever worked. Real brains are happy accidents layered on happy accidents - and that's exactly what artificial neural networks are becoming too. While animal brain architectures were produced over millions of years by accidental improvements naturally selected, neural network architectures are produced over years by thoughtful hypotheses naturally selected by their performance on machine learning tasks.

So what? A couple of key insights follow from this. First, as long as so many researchers stay focused on the predictive power of their models - which they likely will, since breaking records on learning benchmarks makes for good papers - neural network practice will continue to run far ahead of what the theory can explain. Tips, tricks, and happy accidents will continue to compound on one another, producing better and better results without prior statistical proof.

Second, this means that we should expect a very powerful artificial intelligence to be engineered long before a comprehensive theory emerges to explain how it actually works, or - crucially - before a theory emerges to explain how a human brain actually works. If we can't invent a general mathematical framework for even our own creations, then we should expect one for the product of real evolution to be much further down the line.
