AI Paradigms

A recent Technology Review article charting trends in AI research caught my eye. We like to say there’s nothing new under the sun, but AI research, of all fields, should be literally new. Or is it?

The three most common paradigms in the pursuit of true AI are knowledge-based systems, machine learning, and, more recently, reinforcement learning.

Yet these aren’t new approaches at all; many have been around since the 1950s. As processing speeds increase and neural networks come into their own, they are all vying for the limelight.

Knowledge-based systems were very popular initially because they boiled the knowable world down into rules. If there’s a thought, there’s a rule for it, the logic went. So researchers built more and more rules until the systems were such a glut of rules that they couldn’t get out of their own way.
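To make the idea concrete, here’s a toy sketch of that rule-based approach: knowledge encoded as if-then rules and an engine that fires whichever rules match the known facts. The rules themselves are hypothetical examples, not drawn from any real expert system; production systems of the era held thousands of them.

```python
# A toy knowledge-based system: the world boiled down to if-then rules.
# (Hypothetical diagnostic rules for illustration only.)
RULES = [
    (lambda facts: "fever" in facts and "cough" in facts, "suspect flu"),
    (lambda facts: "fever" in facts and "rash" in facts, "suspect measles"),
    (lambda facts: "sneezing" in facts, "suspect allergies"),
]

def infer(facts):
    """Fire every rule whose condition matches the known facts."""
    return [conclusion for condition, conclusion in RULES if condition(facts)]

print(infer({"fever", "cough"}))  # only the flu rule fires
```

The brittleness the paragraph describes shows up quickly: every new situation demands a new rule, and rules begin to overlap and contradict one another as the set grows.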

Supervised learning with neural networks got a big boost in 2012, when Geoffrey Hinton and his team from the University of Toronto beat everyone at ImageNet by more than ten percentage points, moving the needle not incrementally but massively. But deep learning needs so much labeled data that training a model with discriminative capabilities can take weeks and petabytes of data. This is what we generally think of as machine learning.
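The core of supervised learning can be shown in miniature: fit a single artificial neuron to labeled examples by repeatedly nudging its weights to shrink the error. This toy uses six hand-made data points, my own invention for illustration; ImageNet-scale training is the same idea with millions of labels and far deeper networks.

```python
# A minimal taste of supervised learning: one neuron, gradient descent.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Labeled data: inputs x with known answers y (here, y = 1 when x > 0.5).
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

w, b = 0.0, 0.0
for _ in range(5000):                 # repeatedly nudge the weights...
    for x, y in data:
        p = sigmoid(w * x + b)        # prediction for this labeled example
        err = p - y
        w -= 0.5 * err * x            # ...in the direction that shrinks
        b -= 0.5 * err                # the error against the label

print([round(sigmoid(w * x + b)) for x, _ in data])  # learned labels
```

Every update here depends on a known answer y, which is exactly why the paradigm is so hungry for labeled data.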

The new old kid on the block, reinforcement learning, has seen a huge increase in research-paper mentions. Reinforcement learning “mimics the process of training animals through punishments and rewards.” It languished for many years until 2015, when DeepMind’s AlphaGo defeated a professional Go player, going on to beat world champion Lee Sedol in 2016.
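That reward-and-punishment loop can be sketched in a few lines. Here a toy agent, facing three slot machines with hidden payout odds (numbers I made up for the example), learns purely from rewards which machine pays best; AlphaGo applies the same trial-and-error principle at vastly larger scale.

```python
# Reinforcement learning in miniature: learn from rewards alone.
import random

random.seed(0)
payout_odds = [0.2, 0.5, 0.8]   # hidden from the agent
values = [0.0, 0.0, 0.0]        # the agent's learned reward estimates

for step in range(2000):
    if random.random() < 0.1:                   # sometimes explore...
        action = random.randrange(3)
    else:                                       # ...otherwise exploit
        action = values.index(max(values))
    reward = 1 if random.random() < payout_odds[action] else 0
    values[action] += 0.05 * (reward - values[action])  # learn from reward

print(values.index(max(values)))  # the agent settles on the best machine
```

No labels, no rules: the agent is told only how well it did, and good behavior is reinforced over time.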

So whether the future belongs to reinforcement learning, knowledge-based systems, or Bayesian networks, no one knows. In the article, Pedro Domingos of the University of Washington, author of The Master Algorithm, says either another older paradigm will rise to the surface again or we could see something entirely new.