What is deep learning? Is it a magic process that does what the brain does? Like any term that becomes a buzzword, “deep learning” has been defined many times by many people, and some of those definitions differ considerably from each other. Progress in technology is often marked by spectacular claims for a new technique or concept; “deep learning” is no exception. No sooner had the term been coined than people were making large-scale claims for it, unrelated to its original application, for example: “Deep learning is an approach and an attitude to learning, where the learner uses higher-order cognitive skills such as the ability to analyse, synthesize, solve problems, and thinks meta-cognitively in order to construct long-term understanding.”
The current Wikipedia definition (July 2016) may be correct but is not very intelligible to non-specialists: “a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.”
In contrast, a down-to-earth definition came from Jack Rae, Google DeepMind Research Engineer, in a post: “Deep learning refers to artificial neural networks that are composed of many layers.” This is certainly intelligible, but perhaps not a very complete explanation.
Unfortunately, many definitions try to claim that deep learning is following how the brain works. How about this one, from the Dataversity site? It couldn’t be more all-embracing: “Deep Learning tries to emulate the functions of the inner layers of the human brain … each time new data is poured in, its capabilities get better.”
More lengthy descriptions, such as one by Pete Warden, suggest that the “deep” aspect of deep learning is not emulating the brain, but applying new techniques “that allow us to build and train neural networks to handle previously unsolved problems”. More precisely, he describes deep learning as “a mechanical process to take the weights from initial random values to progressively better numbers that produce more accurate predictions”. That starts to become clearer.
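Warden’s “mechanical process” can be sketched in a few lines. The toy example below (a single weight fitted by gradient descent, not anything resembling a real deep network, and with made-up data) shows the core idea: a weight starts at a random value and is nudged, step by step, toward a value that produces more accurate predictions.

```python
import random

# Toy data where the true relationship is y = 2 * x.
data = [(x, 2 * x) for x in range(1, 6)]

# Start from a random initial weight, per Warden's description.
random.seed(0)
w = random.uniform(-1.0, 1.0)
learning_rate = 0.01

for step in range(100):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Nudge w toward a "progressively better number".
    w -= learning_rate * grad

# After training, w should be close to the true value of 2.0.
print(round(w, 2))
```

Deep learning scales this same loop up to millions of weights arranged in many layers, but the mechanism Warden names is exactly this: random start, repeated small corrections, better predictions.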
Yann LeCun, head of the Artificial Intelligence Lab at Facebook, offered both a right and a wrong definition in an interview. As for the wrong definition, he stated “My least favorite is ‘It works just like the brain’”. Asked to give his own eight-word definition, he suggested “machines that learn to represent the world”. That’s pretty simple – and only seven words long.
Here at UNSILO, we make use of deep learning to carry out effective and powerful concept extraction. But we don’t claim we are replicating the human brain in the process, or that we are thinking meta-cognitively.