What is Machine Learning?

What is machine learning exactly? The question may seem basic, but the answer can be complicated.

The reason that there seems to be no common ground in the definition of machine learning is that everyone involved in this computer-based science is approaching the question from a different angle.

Developers will answer from a developmental point of view, statisticians from a statistical one, and mathematicians from, yes, you guessed it, a mathematical one.

These different approaches to answering the question of what machine learning is do not make any of the aforementioned fields wrong; they simply give different perspectives on the answer.

We, however, are not looking for perspective at this point, but rather a generic answer that covers machine learning in its entirety.

As AI expert Jason Brownlee puts it:

‘Machine learning is like farming or gardening. Seeds is the algorithms, nutrients is the data, the gardener is you and plants is the programs’

The Basic Concepts of Machine Learning

The basic concept of machine learning is to get computers to effectively program themselves over time: they collect data, learn from it, and then react and act on what they have learned.

They do this using algorithms, of which there are many, all of which contain three elements.

Representation – the formal language in which the computer expresses the model, such as a set of classifiers.

Evaluation – the scoring function, also known as the objective function, used to judge candidate models.

Optimisation – the method used to search for the highest-scoring classifier. Both custom and off-the-shelf search methods can be used.
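
To make these three elements concrete, here is a minimal, hypothetical sketch (my own illustration, not from the article) of a one-dimensional threshold classifier: the representation is ‘predict 1 when x exceeds a threshold’, the evaluation is accuracy on labelled data, and the optimisation is a simple search over candidate thresholds.

```python
# Toy data: (feature, label) pairs, invented for illustration.
data = [(0.1, 0), (0.4, 0), (0.35, 0), (0.8, 1), (0.9, 1), (0.7, 1)]

def predict(threshold, x):          # representation: a threshold rule
    return 1 if x > threshold else 0

def accuracy(threshold, data):      # evaluation: the scoring function
    return sum(predict(threshold, x) == y for x, y in data) / len(data)

def fit(data):                      # optimisation: search for the best score
    candidates = [x for x, _ in data]
    return max(candidates, key=lambda t: accuracy(t, data))

best = fit(data)
print(accuracy(best, data))  # the highest-scoring threshold separates the classes
```

Swapping in a richer representation, a different scoring function, or a smarter search gives a different algorithm, but the three-part structure stays the same.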

Machine Learning Definitions by the Experts

Arthur Samuel

Image Courtesy IBM. Arthur Samuel’s Machine Learning Program Playing Checkers

There is no better place to start when giving a definition of machine learning than with Arthur Samuel, a true pioneer of this technology.

Whilst Arthur Samuel was not the only person to explore machine learning in the 1950s, he is believed to be one of the first to give a succinct, successful demonstration of the concept of artificial intelligence.

He did this by creating a checkers program that allowed the computer to play games against itself.

Over a relatively short time, the computer played itself at checkers thousands of times and began to recognise patterns of play that would lead to either a loss or a win.

It learned which moves gave a higher percentage chance of a win, and which moves to avoid because they would most likely lead to a loss. The end result was a computer that played checkers better than Samuel himself ever could.

To describe this concept of a computer learning without being explicitly programmed to do so, Arthur Samuel coined the term ‘machine learning’, which we still use today. He defined the technology as a:

‘field of study that gives computers the ability to learn without being explicitly programmed.’

Tom Mitchell

Tom Mitchell, Professor of Machine Learning, School of Computer Science, Carnegie Mellon University

Tom Mitchell is well-known for his significant contributions to the growth of cognitive neuroscience, artificial intelligence, and machine learning. In his book aptly titled ‘Machine Learning,’ he defines this technology with the words:

‘The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience.’

Mitchell also goes on to provide a second, more formal and mathematical definition of machine learning:

‘A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.’
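
As an illustration of Mitchell’s framing, the toy sketch below (my own invented example, not Mitchell’s) fixes a task T (classifying numbers as ‘low’ or ‘high’), a performance measure P (accuracy on a fixed test set), and shows performance improving as experience E grows.

```python
def train(examples):
    """Learn one centroid per class from labelled (value, label) pairs."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def accuracy(centroids, test_set):          # the performance measure P
    correct = 0
    for x, y in test_set:
        pred = min(centroids, key=lambda c: abs(x - centroids[c]))
        correct += pred == y
    return correct / len(test_set)

# Task T: label small numbers "low" and large numbers "high".
test_set = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]

little_E = [(6, "low"), (3, "high")]        # two unlucky, noisy examples
more_E = little_E + [(1, "low"), (2, "low"), (8, "high"), (9, "high")]

print(accuracy(train(little_E), test_set))  # poor with little experience
print(accuracy(train(more_E), test_set))    # improves with more experience
```

The program ‘learns from experience E’ in Mitchell’s sense: its accuracy P on tasks in T rises as more labelled examples arrive.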

Stephen Marsland

In his book ‘Machine Learning: An Algorithmic Perspective’, Stephen Marsland takes the definition of machine learning used by Tom Mitchell and expands on it by taking, as the title states, an algorithmic approach.

He explains this motivation in the prologue to the first edition:

‘One of the most interesting features of machine learning is that it lies on the boundary of several different academic disciplines, principally computer science, statistics, mathematics, and engineering. …machine learning is usually studied as part of artificial intelligence, which puts it firmly into computer science …understanding why these algorithms work requires a certain amount of statistical and mathematical sophistication that is often missing from computer science undergraduates.’

This is an important perspective, as it insightfully draws us away from believing that machine learning stems from just one scientific discipline. Rather, it is multidisciplinary in nature.

Stephen Marsland is a professor of scientific computing at Massey University in Palmerston North. His research interests include mathematical computing, machine learning, and algorithms.

Andrew Ng

Andrew Ng

Andrew Ng is undeniably a titan of the machine learning industry. In his illustrious career, he has co-founded and led Google Brain, been chief scientist at Baidu, and co-founded Coursera, a pioneer in online training.

He has also recently written a book on machine learning entitled ‘Machine Learning Yearning’, a practical guide for those interested in the technology. Ng distributed the book for free.

Over the years, Andrew Ng has given many expert definitions of what machine learning is. None of them, however, is as powerful as his overall assessment of the technology:

‘AI is the new electricity. Just as 100 years ago electricity transformed industry after industry, AI will now do the same.’

Examples of machine learning in companies

Twitter

Twitter’s Torch Decision Trees

In 2017, billionaire Mark Cuban purchased stock in the social media platform Twitter. Now, this may not sound strange; after all, people invest in companies all the time.

However, this was different: the decision to buy the stock was based on the fact that, as Cuban himself said in an interview with CNBC, ‘they finally got their act together with artificial intelligence.’

Twitter, of course, is a hybrid of blogging and instant messaging at heart that also crucially reports on news, announces events, and promotes businesses.

It has over 336 million monthly users who interact with their followers. This makes it an ideal ground for the use of artificial intelligence and machine learning to enhance the user’s experience and ultimately make money for the company.

In a company blog post in June 2016, Twitter founder Jack Dorsey announced that they had bought Magic Pony Technology with the intention to ‘build the most advanced AI platform in the world.’ The division was renamed Twitter Cortex.

For Twitter, this translated into machine learning that takes in lots of data, processes it, and learns over time which content is most relevant to each of its users.

It does this by:

  • Considering the content of the tweet such as text, photo, and video
  • Looking to see if there is past interaction between author and reader
  • Considering if the tweet has a tone or content that has been previously appreciated

Ultimately, the more relevant a tweet is to the user, the higher it will appear in their feed. The most relevant tweets will usually be found in the user’s ‘In case you missed it’ section.

Showing users the most relevant and interesting content is not the only use Twitter has for machine learning. It has also proved extremely useful and reliable in keeping inappropriate and racist content off the platform.

According to the Financial Times, from January to June 2017 alone, machine learning was responsible for taking down nearly 300,000 terrorist accounts.

In fact, only 5% of suspended terrorism-related accounts are taken down following reports from human users; the other 95% are the domain of AI.

Apple

Apple’s A12 chip

You could be forgiven for thinking that Apple has shown up incredibly late to the machine learning party; in actuality, they are just fashionably late.

After all, Apple did introduce Siri, a voice assistant, to us all on a smartphone before anyone else, and is planning to extend that application with a new smart home device, the HomePod.

Having said that, they do have their work cut out to catch up with companies such as Amazon and Google, who already have ‘Alexa’ and the ‘Google Home Assistant’. Even so, Apple is making huge strides in other areas of machine learning.

Apple A12 AI Chip 

This includes the A12 chip, designed in-house by Apple’s own developers, whose neural engine is capable of a magnificent 5 trillion operations per second.

In terms of machine learning, this has enabled the Apple iPhone to recognise automatically which tasks it should run on the GPU, which on the neural engine, and which on the primary part of the chip.

This capability has resulted in extended battery life on a mobile phone that was already renowned for its lengthy periods between charges.

The neural engine itself is capable of complex machine learning. Following on from Apple’s ‘Touch ID’, the newly launched ‘Face ID’ uses a neural network algorithm to plot facial features. The neural engine was trained to do this using millions and millions of images, to minimise mistakes.

On the downside, the machine learning involved in this feature still needs to become more accurate with regard to features such as hair and glasses. Apple has, however, promised that this accuracy will improve this year.

The neural engine can also use its neural network to capture better pictures. With machine learning, it can clearly decide what is the subject of a photograph and what is the background.

Considering the short time between pressing the shutter button and the picture being taken, this is a fine example of just how fast the neural engine works, and of just how far machine learning has come.

Google

Google ai

Google is probably regarded as one of the most advanced companies in the area of machine learning. It first made its explorations public in 2011, when it announced the ‘Google Brain’ project.

Just twelve months later, they announced that they had indeed built a neural network that could mimic the human process of understanding and learning.

Following this, in 2014, Google acquired the machine learning startup DeepMind, which was concentrating on combining existing machine learning techniques with cutting-edge research in neuroscience.

This was a real attempt, and a success story, in getting computers to resemble ‘real brains’ more closely than they ever had before.

From a practical point of view, Google has used this technology across many of its services. Image recognition was initially put to work sorting through millions of images and classifying them more accurately, giving Google users much better search results.

Similar to Twitter, Google also analysed the viewing habits of users searching for YouTube content and provided suggestions based on the information gained. This, they have found, keeps users watching on the platform and the advertising money flowing in.

More recently, early this year, Google launched an artificial intelligence chatbot that can answer messages for its users on platforms such as Twitter, Skype, and Slack.

These bots have, however, come under some fire for the machine learning perhaps being a little too clever, resulting in the use of profanity and growing bigotry.

How Do We Get Machines To Learn?


The first step in getting a machine to learn is to choose the machine learning method to be used. At present there are four types of machine learning, which are as follows:

Supervised learning (inductive learning)

Supervised learning involves using training data that already includes the desired outputs. For example: this email is not spam, but this one is.
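
A minimal sketch of that spam example, assuming invented training messages and a simple word-counting classifier (a toy illustration, not a production filtering technique):

```python
from collections import Counter

# Labelled training data: the desired output ("spam" / "not spam") is included.
training = [
    ("win money now", "spam"),
    ("free prize win", "spam"),
    ("meeting agenda attached", "not spam"),
    ("lunch tomorrow", "not spam"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Predict the label whose training words overlap the message most."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("win a free prize"))        # spam
print(classify("agenda for the meeting"))  # not spam
```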

Semi-supervised learning

Semi-supervised learning is machine learning where only some of the desired outputs are included in the training data.

Unsupervised learning

Unsupervised learning is where the training data given does not include any desired outputs at all.

Reinforcement learning

Reinforcement learning is the hardest form of machine learning: it involves learning a sequence of actions from rewards. It is by a long way the most ambitious type of machine learning.
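
The reward idea can be sketched as follows. This is a deliberately simplified, hypothetical two-action example with a fixed exploration schedule, not a full reinforcement learning algorithm: the learner keeps a running value estimate per action and gradually prefers the action that pays more.

```python
actions = ["left", "right"]
rewards = {"left": 0.2, "right": 0.8}   # hypothetical average payoffs
values = {a: 0.0 for a in actions}      # learned value estimates
counts = {a: 0 for a in actions}

for step in range(100):
    if step % 10 == 0:                  # occasionally explore each action
        action = actions[(step // 10) % 2]
    else:                               # otherwise exploit the best-known one
        action = max(values, key=values.get)
    reward = rewards[action]
    counts[action] += 1
    # incremental average: V <- V + (r - V) / n
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get), counts)  # the learner settles on "right"
```

Real reinforcement learning handles sequences of actions and delayed rewards, but the core loop, act, observe a reward, update an estimate, is the same.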

To get a machine to learn, we also need to give it an algorithm to work with. There are a multitude of ways to achieve machine learning, using algorithms such as deep learning, decision trees, neural networks, and K-means clustering.

Which one of these algorithms you use will depend on what you are trying to achieve, how much data you have and what type it is.

A Five-Step Process of Machine Learning


1. Identify the problem

When identifying the problem you wish to solve with machine learning, you need to take three steps. First, you need to decide exactly what the problem is and describe it.

Secondly, ask yourself why this problem needs solving and the benefits that will ensue from solving it with machine learning. Finally, think about how you would solve this problem manually and without machine learning intervention.

2. Ready the data

Once you have correctly identified your problem, you will need to gather the data needed. This is a hugely important step, as the quantity and quality of the data you input will determine how well the machine learning model will function.

When inputting the data you have collected, you should always make sure it is in a randomised order, as you do not want the order in which examples are given to affect the results you receive.

In other words, you do not want your machine learning to be dependent on or decided by the order information is received.
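
A small sketch of this step, using a made-up dataset and a fixed random seed so the shuffle is reproducible:

```python
import random

# Hypothetical labelled dataset, initially in a systematic (sorted) order.
dataset = [(x, "spam" if x % 2 else "not spam") for x in range(10)]

random.seed(42)           # fixed seed only so the example is reproducible
random.shuffle(dataset)   # randomise in place before any train/test split

split = int(0.8 * len(dataset))
train_set, test_set = dataset[:split], dataset[split:]
print(len(train_set), len(test_set))  # 8 2
```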

3. Check the algorithms

The next step in the process of machine learning is to check how algorithms perform by running 10-20 standard algorithms, drawn from the different types, across the dataset.

The goal is to eliminate the combinations of algorithm and data that are failing, and to spotlight and study further those that are working well.
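
Spot-checking can be sketched like this; the three baseline ‘algorithms’ and the dataset are invented for illustration, and real spot-checks would use proper models and cross-validation:

```python
# Toy labelled data: (feature, label) pairs.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]

# Three cheap candidate models, standing in for 10-20 standard algorithms.
def always_zero(x):    return 0
def threshold_half(x): return 1 if x > 0.5 else 0
def round_up(x):       return 1 if x >= 0.25 else 0

candidates = {"always_zero": always_zero,
              "threshold_half": threshold_half,
              "round_up": round_up}

def score(model):
    """Accuracy of one model on the dataset."""
    return sum(model(x) == y for x, y in data) / len(data)

# Run every candidate over the same data, then keep the best performer.
results = {name: score(model) for name, model in candidates.items()}
best = max(results, key=results.get)
print(best, results[best])
```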

4. Improve the results

The fourth step in the process of machine learning is to improve on the results that you have attained. This involves:

  • algorithm tuning – treating the discovery of the best models as a search problem
  • ensemble methods – combining the predictions made by multiple models
  • extreme feature engineering – pushing the limits of aggregation and decomposition
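
As a concrete illustration of one of these tactics, ensemble methods, here is a hypothetical sketch in which three weak classifiers, each wrong on exactly one example, are combined by majority vote:

```python
# Toy data: inputs 1-3 are class 1, inputs 4-6 are class 0.
data = [(1, 1), (2, 1), (3, 1), (4, 0), (5, 0), (6, 0)]

# Each weak model is deliberately wrong on exactly one example.
def model_a(x): return 1 if x <= 3 or x == 4 else 0       # wrong on x=4
def model_b(x): return 1 if (x <= 3 and x != 1) else 0    # wrong on x=1
def model_c(x): return 1 if x <= 3 or x == 6 else 0       # wrong on x=6

def vote(x):
    """Ensemble: predict whatever the majority of the members predict."""
    return 1 if model_a(x) + model_b(x) + model_c(x) >= 2 else 0

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

for m in (model_a, model_b, model_c, vote):
    print(accuracy(m))   # each member misses one example; the ensemble misses none
```

Because the members make their mistakes on different examples, the majority vote outscores every individual model, which is the intuition behind ensembles.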

5. Present the results

Finally, you will need to present the results of the problem you have solved using machine learning. After all, the results are pretty much useless unless they are going to be put to work.

To do this clearly, whether it is for a stakeholder, or just yourself, you will want to present your results in a clear way such as:

  • Why and where the problem you have solved using machine learning exists
  • What exactly the problem was
  • What the solution is
  • What you discovered
  • What the limitations of the machine learning may be

Types of Machine Learning Algorithms


As briefly mentioned earlier, there are a multitude of machine learning algorithms available to match the nearly limitless uses of machine learning.

In fact, new algorithms are being produced every day that range from the simple to the highly complex. Here are just a few of the most commonly used:

Decision tree – This algorithm uses observations about actions to identify the preferred path to the required outcome.

K-means clustering – This model places data points into a specified number of clusters based on shared characteristics.
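
A compact, pure-Python sketch of the k-means idea on one-dimensional points (the data and starting centres are invented): assign each point to its nearest centre, move each centre to the mean of its points, and repeat until the assignment stops changing.

```python
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]   # two obvious groups
centres = [0.0, 10.0]                     # k = 2 starting guesses

for _ in range(10):
    # Assignment step: each point joins the cluster of its nearest centre.
    clusters = [[] for _ in centres]
    for p in points:
        nearest = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
        clusters[nearest].append(p)
    # Update step: move each centre to the mean of its cluster.
    new_centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    if new_centres == centres:            # converged: assignments are stable
        break
    centres = new_centres

print(sorted(centres))  # roughly [1.0, 8.0], the two cluster centres
```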

Neural networks – This is the area of machine learning referred to as deep learning. It uses massive amounts of training data to recognise correlations between large numbers of variables, so that it can learn to process incoming data in the future.

Reinforcement learning – This is another area of machine learning that falls into the category of deep learning. It works on a reward-and-penalty basis that ensures the algorithm learns the optimal process.

Visual Analytics Of Machine Learning Models

Image: Ganes Kesari

The concept that machine learning models can be better understood and presented in a visual form is fairly new and only in the initial stages of being researched.

The theory, however, is that with so much going on in and around this area, visual representation may make it easier for us to assimilate the ideas and concepts around machine learning technology.

In a recent study on this very subject, academics at Tsinghua University in China worked with an idea that they called ‘interactive model analysis’ which was an attempt to have machine learning better understood through visual analytics.

First, they studied methods already used in the area of machine learning and then provided visual interpretations of how and why they behave in the manner they do.

Concern was voiced, however, as they believed a visual framework for machine learning might lead to uncertainties on the part of both humans and machines.

Another consideration when presenting machine learning as a visual picture is the manner in which we convert it into a visual piece. Typical tables and charts, for example, may not illustrate the dynamics of machine learning in an optimal manner.

Ganes Kesari addressed this very issue recently in an article presenting four elements to be used in visual depictions of machine learning, which would ensure the aesthetics attract the user’s attention and provide benefit.

The elements in this framework are:

1. Information Design – The interpretations of statistics should be visually appealing, with great design and user interface.

Image: Ganes Kesari – Information Design – A static visual presentation can encapsulate and illustrate model results.

2. Adaptive Abstraction – The levels of detail and complexity involved in visual depiction should be simplified.

Bret Victor’s ladder of abstraction with a toy car

3. Model Unravelling – The information provided should flow in a simple way that everyone looking at the visualisation will understand. This is especially true of concepts such as neural networks, which are extremely difficult to grasp.

A methodology for classification models

4. User Interactivity – Users will interact more with a machine learning model when the visualisation is interesting and appealing; users find appealing visuals more comfortable to work with.

What-if modelling for prescriptive action

Overall, research into machine learning visual depiction is still very much in its initial stages and will need further scrutiny before it becomes a major player.

We need to ensure that all aspects of machine learning are kept intact and that we do not compromise its workings and our understandings of them.

A Brief Dip Into Deep Learning

Random Multimodel Deep Learning (RDML) architecture for classification

Deep learning is basically a field of machine learning built on algorithms inspired by the workings of the human brain, called neural networks.

Rather than using task-specific algorithms, deep learning learns data representations, which can be supervised, semi-supervised or unsupervised.
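
As a taste of the neural-network idea, here is a hypothetical single-neuron sketch, a perceptron learning the logical AND function. It is far simpler than anything ‘deep’, but it shows weights improving with experience, which is the seed of the whole field.

```python
# Training data for logical AND: ((input1, input2), target).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
bias = 0.0
rate = 0.1       # learning rate

def predict(x):
    """Fire (output 1) when the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

# Perceptron rule: nudge the weights whenever the prediction is wrong.
for _ in range(20):                       # a few passes over the data
    for x, target in examples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

Deep learning stacks many layers of such units and trains them with more sophisticated methods, but the principle of adjusting weights to reduce error is the same.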

For those who have worked with neural networks since the 90s and early 00s and are now confused, this nuanced perspective from Andrew Ng, from his 2013 talk ‘Deep Learning, Self-Taught Learning and Unsupervised Feature Learning’, might help:

‘Using brain simulations, [we] hope to: make learning algorithms much better and easier to use; make revolutionary advances in machine learning and AI. I believe this is our best shot at progress towards real AI.’

Machine Learning Key Takeaways

1. Setting aside a portion of your training data for cross-checking is important, as you always want your selected algorithm or classifier to perform well on new data.

2. Undeniably, the most important factor in successful machine learning is having enough data to train your model. The features used to describe the data, which are domain-specific, are also important.

3. When algorithms do not work, or do not work well, it is more often than not due to a problem with the training data: noisy data, skewed data, or insufficient amounts of it.

4. Simplicity does not always imply accuracy. No correlation can be drawn between overfitting and the number of parameters of a model.

5. As we have no control over observational data, experimental data should be used where possible. This could be, for example, information gained from sending varying emails to a random audience.
