A Response to “Why Humans Will Never Understand AI”
by Dennis Stone, Senior Dev Ops Engineer
Article for reference: Why humans will never understand AI – BBC Future
There are generally two outlooks on AI. One is hope and excitement for the future and its endless possibilities. The other is fear of a doomsday scenario in which we serve an AI overlord. The tone of Beers’ writing leads me to believe he is in the doomsday camp. For example, he quotes Hardesty, who wrote an explanation of neural networks in 2017. Hardesty touched on most of the same points Beers does, but his reader is left with a far more positive outlook (“Explained: Neural Networks”).
The only piece of deep learning that isn’t fully understood is the invisible, or hidden, layers. He touches on this with corroboration, but the reader (well, me anyway) is led to believe that everything between the input and output layers is mystical and unknown. That isn’t the case. The number of nodes and layers is defined by the network’s creator, and these parameters are determined by the complexity of the problem being solved or the application use case. Ahmed Gad published a piece on one of my favorite sites, Towards Data Science, that illustrates the methodology behind making these choices (Gad). How data makes its way through the layers and nodes is also well known, albeit PhD-level math that makes my head hurt just to look at.
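To make that concrete, here is a minimal sketch of my own (not from either article) of a tiny feed-forward network in plain NumPy. The layer and node counts are simply values the creator picks for the problem at hand, and the pass through the hidden layers is ordinary, well-documented arithmetic; every name and number below is an assumption for illustration only.

```python
import numpy as np

# Architecture chosen by the network's creator: 4 inputs, two hidden layers
# of 8 and 6 nodes, and 3 outputs. These sizes are design decisions driven by
# the problem, not mysteries (hypothetical values for illustration).
layer_sizes = [4, 8, 6, 3]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Push one input vector through every layer; the math is fully traceable."""
    activation = x
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = activation @ w + b
        # ReLU on the hidden layers; leave the final layer linear
        activation = np.maximum(0.0, z) if i < len(weights) - 1 else z
    return activation

sample_input = rng.standard_normal(layer_sizes[0])
print(forward(sample_input))  # 3 output values, one per output node
```

Nothing in that forward pass is hidden in the mystical sense; the hard part, as the next paragraph gets at, is explaining why the learned weights transform the data the way they do.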
The real unknown is why the data takes the path it does, becoming radically transformed from input to output. There are several theories as to why, all well above my level of understanding, and none has been widely accepted. I have a theory that someone already knows exactly how it works but, like Galileo, has been deemed preposterous until eventually proven true.