Saturday, 22 April 2017

Encyclopedia of Machine Learning by Claude Sammut, Geoffrey I. Webb Free Download PDF





CONTENTS : ENTRIES FROM A TO Z ON MACHINE LEARNING, EACH WITH ITS MEANING, EXAMPLES (IF ANY), AND ILLUSTRATIVE FIGURES (IF ANY)


BASIC INTRODUCTION TO THIS BOOK :

Abduction

Definition : 
Abduction is a form of reasoning, sometimes described as "deduction in reverse," whereby given a rule that "A follows from B" and the observed result "A," we infer the condition "B" of the rule. More generally, given a theory, T, modeling a domain of interest and an observation, "A," we infer a hypothesis "B" such that the observation follows deductively from T augmented with "B." We think of "B" as a possible explanation for the observation according to the given theory that contains our rule. This new information and its consequences (or ramifications) according to the given theory can be considered as the result of (or part of) a learning process based on the given theory and driven by the observations that are explained by abduction.
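As a rough illustration (not from the encyclopedia itself), the basic inference step can be sketched in a few lines of Python: given rules of the form "conclusion follows from condition" and an observation, collect every condition that would deductively yield the observation. The rule base and predicate names below are invented for the example.

```python
# Minimal sketch of single-step abduction: each rule (head, body) reads
# "head follows from body"; abduce() returns every body whose head matches
# the observation, i.e., every candidate explanation "B" for observation "A".
# The rules here are hypothetical examples.

RULES = [
    ("grass_is_wet", "it_rained"),
    ("grass_is_wet", "sprinkler_was_on"),
    ("streets_are_wet", "it_rained"),
]

def abduce(observation, rules):
    """Return all hypotheses B such that 'observation follows from B'."""
    return [body for head, body in rules if head == observation]

print(abduce("grass_is_wet", RULES))
# -> ['it_rained', 'sprinkler_was_on']
```

Both antecedents explain the observation; a fuller abductive system would go on to rank or filter these candidates against the rest of the theory.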

Abduction can be combined with induction in different ways to enhance this learning process.

Motivation and Background : 

Abduction is, along with induction, a synthetic form of reasoning, whereby it generates, in its explanations, new information not hitherto contained in the current theory with which the reasoning is performed. As such, it has a natural relation to learning, and in particular to knowledge-intensive learning, where the new information generated aims to complete, at least partially, the current knowledge (or model) of the problem domain as described in the given theory.

Structure of the Learning Task :
Abduction contributes to the learning task by first explaining, and thus rationalizing, the training data according to a given and current model of the domain to be learned. These abductive explanations either form on their own the result of learning, or they feed into a subsequent phase to generate the final result of learning.

Adaptive Resonance Theory

Definition : 

Adaptive resonance theory, or ART, is both a cognitive and neural theory of how the brain quickly learns to categorize, recognize, and predict objects and events in a changing world, and a set of algorithms that computationally embody ART principles and that are used in large-scale engineering and technological applications wherein fast, stable, and incremental learning about complex, changing environments is needed. ART clarifies the brain processes from which conscious experiences emerge. It predicts a functional link between processes of consciousness, learning, expectation, attention, resonance, and synchrony (CLEARS), including the prediction that "all conscious states are resonant states." This connection clarifies how brain dynamics enable a behaving individual to autonomously adapt in real time to a rapidly changing world.

ART predicts how top-down attention works and regulates fast, stable learning of recognition categories. In particular, ART articulates a critical role for "resonant" states in driving fast, stable learning; hence the name adaptive resonance. These resonant states are bound together, using top-down attentive feedback in the form of learned expectations, into coherent representations of the world. ART hereby clarifies one important sense in which the brain carries out predictive computation. ART has explained and successfully predicted a wide range of behavioral and neurobiological data, including data about human cognition and the dynamics of spiking laminar cortical networks. ART algorithms have been used in large-scale applications such as medical database prediction, remote sensing, airplane design, and the control of autonomous adaptive robots.
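The entry stays at the conceptual level; as a hedged illustration of the core ART mechanics (bottom-up category choice, a top-down vigilance/match test, mismatch reset, and fast learning on resonance), here is a minimal ART1-style sketch for binary inputs. The function names, data, and vigilance value are invented for the example, and real ART implementations add a proper choice function, complement coding, and other refinements omitted here.

```python
def art1_learn(patterns, vigilance=0.6):
    """Cluster binary vectors with a minimal ART1-style procedure (illustrative)."""
    categories = []  # each category is a binary prototype (template) vector
    labels = []
    for x in patterns:
        x_norm = sum(x)
        chosen = None
        # Bottom-up choice: try existing categories, largest overlap first
        # (a simplification of ART's choice function).
        order = sorted(
            range(len(categories)),
            key=lambda j: -sum(a & b for a, b in zip(x, categories[j])),
        )
        for j in order:
            overlap = sum(a & b for a, b in zip(x, categories[j]))
            # Top-down vigilance (match) test: is the match close enough?
            if x_norm and overlap / x_norm >= vigilance:
                # Resonance: fast learning -- the prototype becomes the
                # AND of itself and the input.
                categories[j] = [a & b for a, b in zip(x, categories[j])]
                chosen = j
                break
            # Otherwise: mismatch reset, and the search continues.
        if chosen is None:
            # No category passes vigilance: recruit a new one.
            categories.append(list(x))
            chosen = len(categories) - 1
        labels.append(chosen)
    return labels, categories

labels, cats = art1_learn([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
# With vigilance 0.6 the first two inputs resonate with one category;
# the third fails the match test and recruits a new one.
```

Raising the vigilance parameter forces finer categories (more mismatch resets), while lowering it yields coarser ones; learning is incremental, one pattern at a time, as the entry describes.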

Motivation and Background : 

Many current learning algorithms do not emulate the way in which humans and other animals learn. The power of human and animal learning provides high motivation to discover computational principles whereby machines can learn with similar capabilities. Humans and animals experience the world on the fly, and carry out incremental learning of sequences of episodes in real time. Often such learning is unsupervised, with the world itself as the teacher. Learning can also proceed with an unpredictable mixture of unsupervised and supervised learning trials. Such learning goes on successfully in a world that is nonstationary; that is, one whose rules can change unpredictably through time. Moreover, humans and animals can learn quickly and stably through time. A single important experience can be remembered for a long time.

Inductive Transfer

Definition : 

Inductive transfer refers to the ability of a learning mechanism to improve performance on the current task after having learned a different but related concept or skill on a previous task. Transfer may additionally occur between two or more learning tasks that are being undertaken concurrently. The transferred knowledge may include background knowledge or a particular form of search bias. As an illustration, an application of inductive transfer arises in competitive games involving teams of robots (e.g., RoboCup Soccer). In this scenario, transferring knowledge learned from one task into another task is crucial to acquire the skills necessary to beat the opponent team. Specifically, imagine a situation where a team of robots has been taught to keep a soccer ball away from the opponent team. To achieve that goal, robots must learn to keep the ball, pass the ball to a close teammate, etc., always trying to remain at a safe distance from the opponents.

Now let us assume that we wish to teach the same team of robots to play a different game where they must learn to score against a team of defending robots. Knowledge gained during the first activity can be transferred to the second one. Specifically, a robot can prefer to perform an action learned in the past over actions proposed during the current task because the past action has a significantly higher merit value. For example, a robot under the second task may learn to recognize that it is preferable to shoot than to pass the ball when the goal is very close. This preference can be derived from the first task by recognizing that the precision of a pass is contingent on the proximity of the teammate.

Structure of the System :

The main idea behind a learning architecture using knowledge transfer is to produce a source model from which knowledge can be extracted and transferred to a target model. This allows for multiple scenarios. For example, the target and source models can be trained at different times, such that the transfer takes place after the source model has been trained; in this case there is an explicit form of knowledge transfer, also called representational transfer. In contrast, we use the term functional transfer to denote the case where two or more models are trained simultaneously; in this case the models share (part of) their internal structure during learning (see Neural Networks below). When the transfer of knowledge is explicit, we call it literal transfer when the source model is left intact, and nonliteral transfer when the source model is modified before knowledge is transferred to the target model; in the latter case some processing step takes place on the source model before it is used to initialize the target model.
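These distinctions can be made concrete with a small, hedged sketch of representational (literal) transfer: a source model is trained first, then its learned parameters initialize a target model for a related task. The toy perceptron, tasks, and data below are invented for illustration and stand in for the far richer models the entry has in mind.

```python
def train_perceptron(data, weights, epochs=20, lr=0.1):
    """Train a simple threshold perceptron; returns the updated weight list."""
    w = list(weights)
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i, xi in enumerate(x):
                w[i] += lr * (y - pred) * xi
    return w

# Source task (hypothetical): a "keep-away" indicator, y = 1 when feature 0 is on.
source_data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]
source_w = train_perceptron(source_data, [0.0, 0.0])

# Literal representational transfer: the source model is left intact and its
# weights are copied to initialize the target model for a related task.
target_data = [([1, 0], 1), ([1, 1], 1), ([0, 1], 0), ([0, 0], 0)]
target_w = train_perceptron(target_data, source_w)

# A nonliteral variant would first modify the source weights (e.g., rescale
# or prune them) before using them as the target's starting point.
```

Because the target training starts from the source's learned weights rather than from scratch, a related target task typically needs fewer corrective updates, which is the practical payoff of transfer the entry describes.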






