1. Introduction to Intelligent Control
The term intelligent control may be loosely used to denote a control technique that can be carried out using the intelligence of a human who is knowledgeable in the particular domain of control. In this definition, constraints pertaining to limitations of sensory and actuation capabilities and information processing speeds of humans are not considered. It follows that if a human in the control loop can properly control a plant, then that system would be a good candidate for intelligent control. Information abstraction and knowledge-based decision making that incorporates abstracted information are considered important in intelligent control. Unlike conventional control, intelligent control techniques possess capabilities of effectively dealing with incomplete information concerning the plant and its environment, and unexpected or unfamiliar conditions.
The term adaptive control is used to denote a class of control techniques where the parameters of the controller are changed (adapted) during control, utilizing observations on the plant (i.e., with sensory feedback), to compensate for parameter changes, other disturbances, and unknown factors of the plant. Combining these two terms, one may view intelligent adaptive control as those techniques that rely on intelligent control for proper operation of a plant, particularly in the presence of parameter changes and unknown disturbances.
There are several artificial intelligence techniques that can be used as a basis for the development of intelligent systems, namely expert control, fuzzy logic, neural networks, and intelligent search algorithms.
In this class, we will study some fundamental techniques and some application examples of expert control, fuzzy logic, neural networks, and intelligent search algorithms. The main focus here will be their use in intelligent control.
These artificial intelligence techniques should be integrated with modern control theory to develop intelligent control systems.
In this class, we study intelligent control in four parts: expert control, fuzzy logic and control, neural networks and control, and genetic algorithms.
1.1 Expert Control
Expert control is a control strategy that makes use of expert knowledge and experience. It originates from expert systems and was proposed by K.J. Astrom in 1986 []; its main idea is to design control strategies based on expert knowledge and experience.
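As a rough illustration only (not Astrom's original scheme, and with assumed thresholds and gains), the sketch below encodes a few hypothetical expert rules that choose a control increment from the observed error and its trend:

```python
# Minimal sketch of a rule-based (expert) controller.
# The rules, thresholds, and gains are illustrative assumptions, not
# Astrom's 1986 expert control scheme.

def expert_control(error: float, d_error: float) -> float:
    """Choose a control increment using simple if-then expert rules."""
    if abs(error) > 1.0:
        # Large error: act aggressively to drive it down.
        return -2.0 * error
    if abs(error) > 0.2:
        # Moderate error: proportional action, damped by the error trend.
        return -1.0 * error - 0.5 * d_error
    # Small error: only counteract drift, to avoid chattering.
    return -0.2 * d_error

if __name__ == "__main__":
    print(expert_control(error=1.5, d_error=-0.3))   # aggressive correction
    print(expert_control(error=0.1, d_error=0.05))   # gentle correction
```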
1.2 Fuzzy Logic Control
Fuzzy logic is useful in representing human knowledge in a specific domain of application, and in reasoning with that knowledge to make useful inferences or actions.
In particular, fuzzy logic may be employed to represent, as a set of fuzzy rules, the knowledge of a human controlling a plant. This is the process of knowledge representation. Then, a rule of inference in fuzzy logic may be used according to this fuzzy knowledge base, to make control decisions for a given set of plant observations. This task concerns knowledge processing. In this sense, fuzzy logic in intelligent control serves to represent and process the control knowledge of a human in a given plant.
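To make these two steps concrete, the following minimal sketch (with an assumed single error input, assumed triangular membership functions, and an assumed three-rule base) represents control knowledge as fuzzy rules and processes a plant observation by firing the rules and defuzzifying with a weighted average:

```python
# Minimal sketch of fuzzy knowledge representation and processing.
# Membership functions, rules, and output levels are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Knowledge representation: fuzzy sets for the error input ...
fuzzy_sets = {
    "negative": lambda e: tri(e, -2.0, -1.0, 0.0),
    "zero":     lambda e: tri(e, -1.0,  0.0, 1.0),
    "positive": lambda e: tri(e,  0.0,  1.0, 2.0),
}

# ... and a rule base: "IF error is <set> THEN control action is <value>".
rules = [("negative", 1.0), ("zero", 0.0), ("positive", -1.0)]

def fuzzy_control(error):
    """Knowledge processing: fire every rule, then defuzzify (weighted average)."""
    num = den = 0.0
    for set_name, action in rules:
        w = fuzzy_sets[set_name](error)   # degree to which this rule fires
        num += w * action
        den += w
    return num / den if den > 0.0 else 0.0

if __name__ == "__main__":
    print(fuzzy_control(0.5))   # small positive error -> small negative action
```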
There are two important ideas in fuzzy systems theory:
- (1)
The real world is too complicated for precise descriptions to be obtained; therefore, approximation (or fuzziness) must be introduced in order to obtain a reasonable model.
- (2)
As we move into the information era, human knowledge becomes increasingly important. We need a theory to formulate human knowledge in a systematic manner and put it into engineering systems, together with other information like mathematical models and sensory measurements.
According to the fuzzy universal approximation theorem [], a fuzzy system can approximate any nonlinear function, a property that can be exploited to design adaptive fuzzy controllers. By adjusting a set of weighting parameters of a fuzzy system, it may be used to approximate an arbitrary nonlinear function to a required degree of accuracy.
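The sketch below illustrates this weight-adjustment idea under simple assumptions: a fuzzy system with fixed Gaussian membership functions, a singleton fuzzifier, product inference, and a center-average defuzzifier, whose consequent weights are trained by gradient descent to approximate an assumed nonlinear target function.

```python
# Sketch of the weight-adjustment idea behind fuzzy approximation.
# Centers, width, learning rate, and the target function are assumptions.
import math

centers = [-2.0 + 0.5 * i for i in range(9)]     # rule centers on [-2, 2]
width = 0.5
weights = [0.0] * len(centers)                   # adjustable consequent weights

def fuzzy_system(x):
    """Center-average defuzzified output and the normalized basis functions."""
    mu = [math.exp(-((x - c) / width) ** 2) for c in centers]
    s = sum(mu)
    basis = [m / s for m in mu]
    return sum(w * b for w, b in zip(weights, basis)), basis

def target(x):
    return math.sin(x) + 0.3 * x ** 2            # nonlinear function to approximate

samples = [-2.0 + 4.0 * k / 100 for k in range(101)]
for _ in range(200):                             # simple gradient-descent training
    for x in samples:
        y, basis = fuzzy_system(x)
        err = target(x) - y
        for i, b in enumerate(basis):
            weights[i] += 0.5 * err * b          # adjust each weighting parameter

print(fuzzy_system(1.0)[0], "vs", target(1.0))   # approximation check
```

Because the output is linear in the adjustable weights, simple gradient or least-squares updates suffice; adaptive fuzzy control schemes exploit this structure when deriving adaptation laws.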
1.3 Neural Network and Control
Artificial neural networks are massively connected networks that can be trained to represent complex nonlinear functions at a high level of accuracy. They are analogous to the neuron structure in a human brain.
It is well known that biological systems can perform complex tasks without recourse to explicit quantitative operations. In particular, biological organisms are capable of learning gradually over time. This learning capability reflects the ability of biological neurons to learn through exposure to external stimuli and to generalize. Such properties of nervous systems make them attractive as computation models that can be designed to process complex data. For example, the learning capability of biological organisms from examples suggests possibilities for machine learning.
Neural networks, or more specifically, artificial neural networks, are mathematical models inspired by our understanding of biological nervous systems.
They are attractive as computation devices that can accept a large number of inputs and learn solely from training samples. As mathematical models for biological nervous systems, artificial neural networks are useful in establishing relationships between the inputs and outputs of a system. Roughly speaking, a neural network is a collection of artificial neurons. An artificial neuron is a mathematical model of a biological neuron in its simplest form. From our understanding, biological neurons are viewed as elementary units of information processing in any nervous system. Without claiming neurobiological validity, the mathematical model of an artificial neuron is based on the following theses (a minimal code sketch follows the list):
- (1)
Neurons are the elementary units in a nervous system at which information processing occurs.
- (2)
Incoming information is in the form of signals that are passed between neurons through connection links.
- (3)
Each connection link has an associated weight that multiplies the signal transmitted.
- (4)
Each neuron applies an activation function, which depends on a bias or firing threshold, to the weighted sum of its input signals to produce an output signal.
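The following minimal sketch expresses theses (1) through (4) in code, with an assumed sigmoid activation and assumed weights, bias, and inputs:

```python
# Minimal sketch of an artificial neuron: an activation function applied
# to the weighted sum of the input signals plus a bias. Weights, bias,
# inputs, and the choice of sigmoid activation are illustrative assumptions.
import math

def neuron(inputs, weights, bias):
    """Output signal = activation(weighted sum of input signals + bias)."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))          # sigmoid activation

if __name__ == "__main__":
    print(neuron(inputs=[0.5, -1.0, 2.0], weights=[0.8, 0.4, -0.3], bias=0.1))
```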
Since the idea of the computational abilities of networks composed of simple models of neurons was introduced in the 1940s, neural network techniques have undergone great developments and have been successfully applied in many fields such as learning, pattern recognition, signal processing, modeling, and system control. Their major advantages of highly parallel structure, learning ability, nonlinear function approximation, fault tolerance, and efficient analog VLSI implementation for real-time applications greatly motivate the usage of neural networks in nonlinear system identification and control.
In many real-world applications, nonlinearities, unmodeled dynamics, unmeasurable noise, and multiloop interactions pose problems for engineers implementing control strategies.
A BP or RBF neural network can approximate any nonlinear function [], a property that can be exploited to design adaptive neural network controllers. By adjusting a set of weighting parameters of a neural network, it may be used to approximate an arbitrary nonlinear function to a required degree of accuracy.
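As a simple illustration of this idea (using an RBF network; a BP network would instead adjust the weights of a multilayer perceptron by backpropagation), the sketch below adjusts the output weights of an RBF network so that it approximates an assumed nonlinear target function; the centers, width, and target are assumptions.

```python
# Sketch of an RBF network as a nonlinear function approximator: fixed
# Gaussian basis functions with adjustable output weights, here fitted in
# one shot by least squares. Centers, width, and target are assumptions.
import numpy as np

centers = np.linspace(-3.0, 3.0, 15)             # fixed Gaussian centers
width = 0.6

def rbf_features(x):
    """Gaussian radial basis functions evaluated at the inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def target(x):
    return np.sin(2.0 * x) + 0.1 * x ** 3        # nonlinear function to approximate

x_train = np.linspace(-3.0, 3.0, 200)
Phi = rbf_features(x_train)                      # design matrix, shape (200, 15)
weights, *_ = np.linalg.lstsq(Phi, target(x_train), rcond=None)

x_test = np.array([1.2])
print(rbf_features(x_test) @ weights, target(x_test))   # approximation check
```

In adaptive neural network control, such weights would typically be updated online by an adaptation law rather than fitted in batch as done here.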
1.4 Intelligent Search Algorithm