Introduction
What is Machine Learning (ML)?
There is a general introduction to Machine Learning on Wikipedia. We
have no intention of cloning the content of that article here,
unlike many people trying to explain what ML is.
Everybody can understand what Machine Learning is because the
definition is contained in the name: these are machines that
“learn”, i.e. machines which are able to consume data in order to
improve at their tasks, given feedback from the environment where
they perform these tasks.
As humans, we usually learn a lot - at least we should. From our
childhood we learn the alphabet and how to read and write, usually
painfully. We learn calculus, computations and how to distinguish
the various objects that populate our everyday lives.
The mechanisms of learning - at least in our childhood - are
fundamental. We know of examples of feral children, such as “wolf
children”, “dog children” or other cases of children raised by
primates. In these cases, the humans did not learn properly and were
at least partly “trained” by non-humans (animals). The result was a
deficiency in human intelligence and an inability to solve basic
problems that any “properly trained” human could address.
In our lives, we usually learn from mistakes and errors - things
that hurt us or fool us - and we are able to overcome such errors,
mainly by learning and adapting to the new situation.
The basis of Machine Learning dates back to a 1950 paper authored by
Alan Turing and entitled "Computing Machinery and Intelligence". In
that paper, Turing proposes to replace the question “Can machines
think?” with other questions such as “Can machines do our work?”,
“Can machines win a game?” and, among others, “Can machines learn?”.
In fact, Turing spoke of “learning machines” rather than “Machine
Learning”, the former being in any case the “ancestor” concept.
A machine that could learn would be a “child” machine and would
experience, just as humans do, heredity, mutation, experiments and
choices following those experiments. Complexity would be at stake, so
that despite strong “formal” and “imperative” logic, a self-built
model with its own proper logic would emerge as a result of the
learning. This presupposes the ignorance of the child machine and
also some strong randomness.
Machine Learning is therefore defined as an algorithm which somewhat
mimics the human learning process.
The human learning process is complicated. It involves heredity,
mutations, evolution, probability, randomness, neurons, memory and
other factors. Machine Learning algorithms often make partial use of
the aforementioned factors.
General Principles of ML
The “primitive” idea behind ML is a machine which is fed with data,
a “training set”, and which builds a model - usually a decision,
choice and strategy model - from these data in order to perform some
tasks in an environment, independently of any human intervention.
Depending on the response of the environment, a feedback can be
generated, which can take various shapes: “right” or “wrong”, or a
decimal score for a feature vector, for instance. That feedback,
coupled with the data from the environment, can be integrated into
the previous dataset to create a new dataset from which a new model
can be built, and so on (see the sketch after the component list
below).
This presupposes the following components:
- The data (the “learning set”);
- A model builder (the ML in itself);
- Task(s) to perform in an environment;
- The environment;
- Feedback from the tasks.
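As a rough illustration, here is a minimal, self-contained Python sketch of this loop. The “model” is just a decision threshold on a one-dimensional feature, the environment returns “right”/“wrong” feedback, and each round's observation is folded back into the dataset before the model is rebuilt; the function names and the toy task are invented for illustration and are not taken from any particular library.

```python
import random

def build_model(dataset):
    """Model builder: fit a decision threshold between the two labelled classes."""
    lo = [x for x, label in dataset if label == 0]
    hi = [x for x, label in dataset if label == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

def environment_sample():
    """The environment produces a feature and its (hidden) true label."""
    x = random.uniform(0, 10)
    return x, int(x > 5)  # ground truth, unknown to the learner

def learning_loop(dataset, rounds=100):
    mistakes = 0
    for _ in range(rounds):
        threshold = build_model(dataset)        # model builder (the ML itself)
        x, true_label = environment_sample()
        prediction = int(x > threshold)          # task performed in the environment
        feedback = "right" if prediction == true_label else "wrong"   # feedback
        if feedback == "wrong":
            mistakes += 1
        # Fold the observation (with its revealed label) back into the data,
        # so the next model is built from an enriched dataset.
        dataset.append((x, true_label))
    print("mistakes made while learning:", mistakes)
    return build_model(dataset)

if __name__ == "__main__":
    seed_data = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]  # initial "learning set"
    print("learned threshold:", learning_loop(seed_data))
```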
Learning machines and Machine Learning
At this stage, the reader understands that a learning machine is
Machine Learning together with data, tasks and interaction with an
environment. By itself, Machine Learning does nothing and is unable
to evolve if nothing is “injected” into it.
A learning machine is, therefore, a dynamical self-evolving system,
and the way it interacts with and learns from its errors and from
the environment is just as fundamental as the algorithm/model it
implements.
Machine Learning and Dynamical systems
As the ML learns and is trained, and as we iterate the training
steps, the model moves through the space of all possible models.
When the learning algorithm converges, this is similar to a
dynamical system reaching an equilibrium state.
The “learning” phase of a learning machine can be seen as a system
evolving in time, especially if loops are introduced. This is
especially true for Recurrent Neural Networks (RNNs).
In the original concept by Turing, the learning phase of a machine
is similar to the education of a child, and therefore involves time,
contingencies, possibilities and so on, which makes it relevant to
interpret it as a dynamical system.
In such an approach, concepts such as Markov chains and Lyapunov
stability are very relevant to ML in general.
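To make the analogy concrete, here is a small sketch (with invented numbers) that treats an iterative training rule as a discrete-time dynamical system: gradient descent on a toy quadratic loss keeps applying the same update map until it settles at a fixed point, which plays the role of the equilibrium state.

```python
# Iterative training viewed as a discrete-time dynamical system: the same
# update map is applied until it settles at a fixed point (the equilibrium).
# The loss, learning rate and target are invented for illustration.

def train_step(w, lr=0.1, target=3.0):
    grad = 2 * (w - target)      # dL/dw for the toy loss L(w) = (w - target)^2
    return w - lr * grad         # one step of the "dynamics"

w = 10.0                         # arbitrary starting parameter
for step in range(1000):
    w_next = train_step(w)
    if abs(w_next - w) < 1e-9:   # numerically at equilibrium
        break
    w = w_next

print(f"settled after {step} steps at w = {w:.6f}")  # approaches the target 3.0
```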
Supervised, unsupervised and Reinforcement Learning
The learning processes of ML fall into three different categories:
supervised learning, unsupervised learning and reinforcement
learning.
Supervised learning can be seen as the machine being educated by a
human “supervisor” who guides the learning in some way, while
unsupervised learning can be seen as “wild” learning - more like raw
self-organization and convergence to an equilibrium.
The difference can also be seen in the human process of learning,
where learning consists mainly of education with teachers
(“supervisors”), while animals learn more from their environment in
general and self-adapt to that environment without supervision,
which relates more to unsupervised learning.
What is known as being “street smart” also refers largely to some
form of unsupervised learning, i.e. the ability to learn without
guidance or supervision in an unknown environment.
Unsupervised learning can also be seen as analogous to human
research: facing unknown situations, learning from them, and
eventually self-creating labelled categories.
A mixture of supervised and unsupervised learning is always
possible. This situation is known as semi-supervised learning.
In general, supervised learning in ML will use labelled data. These
are data - which can be of any sort - provided with a label (either
by humans or by other machines). This label indicates the nature of
the data and how they must be interpreted by the machine. The task
of the ML will therefore be to consume a new, unlabelled set of data
and guess, predict and label them (possibly creating new labels) or
classify them (into a fixed, finite set of labels).
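As a hedged illustration of this labelled-data workflow, the sketch below uses scikit-learn (our choice here, not something prescribed by the text) to fit a classifier on a few hand-made labelled samples and then predict labels for new, unlabelled ones; the feature values and labels are invented.

```python
# Supervised classification on labelled data, sketched with scikit-learn.
from sklearn.tree import DecisionTreeClassifier

X_train = [[0.2, 1.1], [0.4, 0.9], [3.8, 4.2], [4.1, 3.9]]  # labelled data
y_train = ["cat", "cat", "dog", "dog"]                       # labels provided up front

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # the supervised "education" phase

X_new = [[0.3, 1.0], [4.0, 4.0]]   # new, unlabelled data
print(model.predict(X_new))        # expected to print something like ['cat' 'dog']
```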
A third category (completing the three basic ML paradigms) is
reinforcement learning. In that approach the teaching is conditioned
by “rewards” or “penalties”. The analogy with human learning is
obvious: punishments from parents, the fines and various penalties
imposed by law-enforcement systems, or the gifts, bonuses, various
perks and medals provided by educational, military or civilian
bodies.
Reinforcement learning is especially relevant to game theory.
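As a minimal sketch of learning from rewards and penalties, the toy example below runs an epsilon-greedy agent on a two-armed bandit; the reward probabilities, the +1/-1 payoffs and the parameter values are all invented for illustration, and this is far from a full reinforcement learning setup.

```python
import random

# Epsilon-greedy agent on a two-armed bandit: rewards and penalties shape
# the agent's value estimates over repeated plays.
REWARD_PROB = {"arm_a": 0.3, "arm_b": 0.7}   # hidden from the agent
values = {"arm_a": 0.0, "arm_b": 0.0}        # the agent's running estimates
counts = {"arm_a": 0, "arm_b": 0}
epsilon = 0.1                                # exploration rate

for _ in range(5000):
    if random.random() < epsilon:            # occasionally explore
        arm = random.choice(list(values))
    else:                                    # otherwise exploit the best estimate
        arm = max(values, key=values.get)
    # "Reward" (+1) or "penalty" (-1) from the environment.
    reward = 1.0 if random.random() < REWARD_PROB[arm] else -1.0
    counts[arm] += 1
    # Incremental average: nudge the estimate toward the observed outcome.
    values[arm] += (reward - values[arm]) / counts[arm]

print(values)   # the estimate for arm_b should end up clearly higher
```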
Multi-agent ML
A set of ML units may be coordinated and cooperative. In that case
we speak of multi-agent ML. For instance, a group of one million
ANNs, each a single-agent ML, may be connected together - as black
boxes - and made to cooperate through further algorithms. This can
be seen as a “team”, for instance.
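The following sketch shows the “team” idea in miniature: a few trivially simple agents (stand-ins for full models treated as black boxes) are combined by majority vote; the thresholds, inputs and voting rule are invented for illustration.

```python
from collections import Counter

# A tiny "team" of cooperating agents: each agent is a trivial threshold
# rule standing in for a full model treated as a black box, and the team
# decides by majority vote.

def make_agent(threshold):
    return lambda x: "high" if x > threshold else "low"

team = [make_agent(t) for t in (0.3, 0.5, 0.7)]   # three cooperating agents

def team_decision(x):
    votes = [agent(x) for agent in team]
    return Counter(votes).most_common(1)[0][0]     # majority vote

print(team_decision(0.6))   # 'high' wins two votes to one
```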