Guest post with Ali Syed, Founder and CEO of Persontyle
Recent advances in making machines more intelligent – able to see, speak and even think like us – have pointed the way to a new era in Artificial Intelligence (AI). This is due in part to breakthroughs in deep learning, a set of algorithms that allow machines to see objects and understand what they see. As they say, AI is finally getting smart with deep learning.
You might be wondering what's so different about deep learning compared with machine learning as we have always known it. In fact, it's easy to see what separates the two. Deep learning is different because it enables representation learning, i.e. learning feature representations automatically instead of having to define them manually based on expert knowledge. How is this possible? All you need is large amounts of data (which we now have) and powerful computers (Moore's law on steroids, e.g. GPUs), and you can build systems that learn the appropriate data representations for themselves.
Machine learning is a very effective technique, but applying it to large-scale problems has usually meant spending ages manually designing (yes, manually) the input features that feed the learning algorithms. Researchers (including three of the leading AI experts, Geoff Hinton, Yann LeCun and Yoshua Bengio) have developed deep learning algorithms that can automatically learn feature representations, even from unlabeled data, sidestepping all that endless feature engineering.
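To make the contrast concrete, here is a minimal sketch in pure Python (a toy example of our own, not any particular researcher's code): a tiny neural network trained on XOR. No hand-crafted feature such as x1·x2 is supplied; the hidden layer learns its own representation of the inputs during training.

```python
import math
import random

random.seed(0)  # deterministic toy run

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR is not linearly separable, so a classical pipeline would
# hand-engineer a feature such as x1 * x2. Here the network
# learns its own hidden features instead.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [0.0, 1.0, 1.0, 0.0]

H = 4  # hidden units: the learned feature detectors
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, y))

loss_before = loss()
lr = 0.5
for _ in range(5000):
    for x, t in zip(X, y):
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)  # output-layer error signal
        for j in range(H):
            # hidden-layer error, using w2[j] before it is updated
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
loss_after = loss()
```

With a fixed seed this toy net drives the squared error down over training; the point is simply that the hidden units' weights, i.e. the representation, were learned from data rather than specified by an expert.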
“You have to realize that deep learning — I hope you will forgive me for saying this — is really a conspiracy between Geoff Hinton and myself and Yoshua Bengio, from the University of Montreal”— Yann LeCun
Deep learning is about learning multiple levels of representation and abstraction that help to make sense of data such as images, sound, and text. At the moment, most deep learning algorithms are based on building massive artificial neural networks that are broadly inspired by how our brain works.
“Deep learning methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower level features. Automatically learning features at multiple levels of abstraction allows a system to learn complex functions mapping the input to the output directly from data, without depending completely on human-crafted features.” — Yoshua Bengio
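The "composition of lower level features" Bengio describes is ordinary function composition: a deep network computes f2(f1(x)), f3(f2(f1(x))), and so on. A minimal sketch (with illustrative layer weights of our own choosing, not taken from any real model):

```python
# Each layer maps the features below it to new, more abstract features;
# stacking layers composes them into a feature hierarchy.
def layer(weights, biases):
    """Return a function computing one layer of ReLU features."""
    def f(xs):
        return [max(0.0, sum(w * x for w, x in zip(row, xs)) + b)
                for row, b in zip(weights, biases)]
    return f

f1 = layer([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0])  # low-level features
f2 = layer([[1.0, 1.0]], [0.0])  # higher-level feature built from f1's outputs

x = [2.0, 1.0]
h1 = f1(x)      # -> [1.0, 1.5]
out = f2(h1)    # -> [2.5]
```

In a trained network these weights would themselves be learned, so each level's features are built automatically from the ones beneath it: exactly the hierarchy the quote describes.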
For more about deep learning algorithms, see for example:
Deep learning has attracted a lot of interest not only from the academic world but also from industry. Just a few decades ago the deep learning movement was an outlier in the world of academia; now its researchers have the attention of the biggest names on the internet, to the extent that some of them are being paid what a top NFL quarterback prospect earns.
“Last year, the cost of a top, world-class deep learning expert was about the same as a top NFL quarterback prospect. The cost of that talent is pretty remarkable.” —Peter Lee, Microsoft Research
Top tech companies like Microsoft, Facebook, and Google pay handsomely to have deep learning experts work for them, even part-time. This goes to show that the improvement in classification performance delivered by this new class of learning algorithms is more than a matter of scientific significance: it translates into better user experiences, smarter connected devices and the internet of things, and a step towards the possibility of building brain-inspired intelligent machines.
RE.WORK is hosting a Deep Learning Innovation Summit on 29-30 January in San Francisco. Discover advances in deep learning and smart artificial intelligence from the world’s leading innovators. Learn from the industry experts in speech & image recognition, neural networks and big data. Explore how deep learning will impact communications, manufacturing, healthcare & transportation. Book your tickets here: https://www.re-work.co/events/deep-learning