Artificial Intelligence is the hot tech paradigm of the moment. It is the subject of a great deal of media hype, hand-wringing and mythologising. It seems worthwhile, therefore, to try to set the scene, look at some definitions, and see where it is currently being applied.
With regard to definitions, well, that is unfortunately not quite so straightforward. Terms such as ‘neural network’, ‘deep learning’ and ‘general AI’ are used frequently in the media, and often interchangeably. The OED defines ‘artificial intelligence’ as ‘…the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.’
This does not feel terribly useful: defining a system as artificially intelligent if it can perform a task that would normally require human intelligence simply raises the next question. What is intelligence, from a human point of view?
The word ‘intelligence’ derives from the Latin terms intelligentia and intellēctus, which originally roughly translated as the ability to perceive, comprehend or understand something. More recently intelligence has served in many ways as a catch-all term to include the ability to demonstrate logical coherence, show reason, behave autonomously and to be adaptable to situations. The term has also widened through the work of Howard Gardner and others to include emotional intelligence, spatial awareness, kinesthetic intelligence, and many more aspects besides.
The short answer is that we have not been able to pin ‘intelligence’ down to a tight definition, and this, as one might expect, is also the case with ‘artificial intelligence’.
Therefore, we can apply ‘artificial intelligence’ to fields as wide as computer audio or visual recognition, self-driving vehicles, robots that can respond autonomously to their environments, recommendations of films via Netflix, and financial analysis. What does seem to set the bar for an ‘artificially intelligent’ system is some aspect of autonomy — some ability to perform a task without a human operative constantly intervening — and a demonstration of adaptability, as in a system which can learn to change and work towards improving its performance, by learning from its own experiences.
The adaptability element mentioned above is what gives rise to another very fashionable term: ‘Machine Learning’. Machine learning enables systems to use algorithmic techniques in order to progressively improve their performance. The algorithms make predictions on data and are able to adjust those predictions based on the eventual outcomes. A simple example is the spam filtering built into Gmail: the algorithms observe instances of spam messages, and they make predictions as to whether incoming email messages constitute spam or not. This technique has evolved over time, from a fairly traditional rule-based system, where the Google anti-spam team would write rules matching individual spam patterns, to a machine-learning approach built on TensorFlow, which means that Gmail is constantly learning and re-learning precisely what constitutes spam. This automation is much less labour-intensive than a manual, rule-writing approach, and it is far more effective, both in terms of the proportion of spam emails detected and the speed with which they are found and blocked.
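The observe-predict-update loop described above can be sketched in miniature. The following is a toy naive Bayes text classifier, a deliberately simplified stand-in for the kind of learned spam filter described here (real systems such as Gmail's use far richer features and models); the training messages are invented for illustration.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesSpamFilter:
    """A toy spam filter: learns word frequencies from labelled
    examples, then scores new messages with naive Bayes."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        # Observe a labelled example: update word frequencies.
        self.label_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        # Score each label: log prior plus summed log likelihoods,
        # with add-one (Laplace) smoothing for unseen words.
        total = sum(self.label_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            counts = self.word_counts[label]
            n_words = sum(counts.values())
            score = math.log(self.label_counts[label] / total)
            for word in tokenize(text):
                score += math.log((counts[word] + 1) / (n_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

filt = NaiveBayesSpamFilter()
filt.train("win free money now", "spam")
filt.train("free prize claim now", "spam")
filt.train("meeting agenda for monday", "ham")
filt.train("lunch on monday?", "ham")

print(filt.predict("claim your free money"))    # -> spam
print(filt.predict("agenda for lunch meeting")) # -> ham
```

Each new labelled message fed to `train` shifts the word statistics, so the filter's future predictions adapt to what it has seen: the learning-from-experience behaviour the paragraph above describes.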
Artificial Intelligence, then, appears in some ways to mirror human intelligence. In some ways. And in some circumstances. It usually involves a degree of autonomy and adaptability, and the term is used across a huge number of different computing and non-computing disciplines. It is, however, constantly shifting, being redefined, and being applied in a range of different, unexpected circumstances.
Just like human intelligence, funnily enough.