1. Introduction
In this first chapter, we introduce general optimization problems and the class of combinatorial optimization problems. As a motivation, we present a number of fundamental combinatorial optimization problems that will be revisited throughout the chapters of this book. We also contrast exact and approximate solution methods and trace a brief history of approximate algorithms (or heuristics), from greedy algorithms and local search to metaheuristics. We motivate the reader and outline, chapter by chapter, the material in the book. Finally, we introduce basic notation and definitions that will be used throughout the book.
1.1 Optimization problems
In its most general form, an optimization problem can be cast as
\[
\text{optimize} \quad f(S)
\]
subject to
\[
S \in F,
\]
where $F$ is the set of feasible solutions and $f$ is a real-valued objective function that associates each feasible solution $S \in F$ with its cost or value $f(S)$. In the case of a minimization problem we seek a solution minimizing $f(S)$, while in the case of a maximization problem we search for a solution that maximizes $f(S)$ over the entire domain $F$ of feasible solutions.
A global optimum of a minimization problem is a solution $S^* \in F$ such that $f(S^*) \leq f(S)$ for all $S \in F$. Similarly, a global optimum of a maximization problem is a solution $S^* \in F$ such that $f(S^*) \geq f(S)$ for all $S \in F$.
Optimization problems are commonly classified into two groups: those with continuous variables, which in principle can take any real value, and those represented by discrete variables, which can take values only from a finite or countably infinite set. The latter are called combinatorial optimization problems and reduce to the search for a solution in a finite (or, alternatively, countably infinite) set, typically formed by binary or integer variables, permutations, paths, trees, cycles, or graphs. Most of this book is concerned with combinatorial optimization problems with a single objective function, although a later section describes a method for continuous problems.
Combinatorial optimization problems and their applications abound in the literature and in real life, as will be illustrated later in this book. As a motivation, some fundamental combinatorial optimization problems will be presented in the next section, together with examples of their basic applications. These problems will be revisited many times throughout the next chapters of this book. In particular, they will be formalized and discussed in detail in the next chapter, where we show that some combinatorial optimization problems are intrinsically harder to solve than others. By harder, we mean that state-of-the-art algorithms to solve them can be very expensive in terms of the computation time needed to find a global optimum, or that only small instances can be solved in a reasonable amount of time.
Understanding the inherent computational complexity of each problem is an absolute prerequisite for the identification and development of an appropriate, effective, and efficient algorithm for its solution.
1.2 Motivation
We motivate our introductory discussion with the description of six typical and fundamental combinatorial optimization problems.
Shortest path problem
Suppose a number of cities are distributed in a region and we want to travel from city s to city t . The distances between each pair of cities are known beforehand. We can either go directly from s to t if there is a road directly connecting these two cities, or start in s , traverse one or more cities, and end up in t . A path from s to t is defined to be a sequence of two or more cities that starts in s and ends in t . The length of a path is defined to be the sum of the distances between consecutive cities in this path. In the shortest path problem , we seek, among all paths from s to t , one which has minimum length.
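To make this concrete, here is a minimal sketch of one standard exact approach, Dijkstra's algorithm, applied to a small hypothetical road map. The function name, the dictionary representation of the distances, and the example cities are illustrative choices, not part of the problem statement above.

```python
import heapq

def shortest_path(dist, s, t):
    """Dijkstra's algorithm on a dictionary-of-dictionaries distance map.
    Returns (length, path) of a shortest s-t path, or (inf, []) if none exists."""
    best = {s: 0.0}               # best known distance from s to each city
    prev = {}                     # predecessor of each city on the best path found
    heap = [(0.0, s)]             # priority queue of (distance, city)
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > best.get(u, float('inf')):
            continue              # stale queue entry, already improved
        for v, w in dist.get(u, {}).items():
            nd = d + w
            if nd < best.get(v, float('inf')):
                best[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if t not in best:
        return float('inf'), []
    path, u = [t], t
    while u != s:
        u = prev[u]
        path.append(u)
    return best[t], path[::-1]

# Hypothetical instance: symmetric road distances among four cities.
roads = {
    's': {'a': 4, 'b': 2},
    'a': {'s': 4, 'b': 1, 't': 5},
    'b': {'s': 2, 'a': 1, 't': 8},
    't': {'a': 5, 'b': 8},
}
print(shortest_path(roads, 's', 't'))   # (8.0, ['s', 'b', 'a', 't'])
```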
Minimum spanning tree problem
Suppose that a number of points spread out on the plane have to be interconnected. Once again, the distances between each pair of points are known beforehand. Some pairs of points have to be directly connected, so as to establish a unique path between any two points. In the minimum spanning tree problem , we seek to determine which pairs of points will be directly connected such that the sum of the distances between the selected pairs is minimum.
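As an illustration, the sketch below computes a minimum spanning tree with Kruskal's algorithm and a simple union-find structure; the edge-list format, function names, and example distances are assumptions made for the sketch.

```python
def minimum_spanning_tree(n, edges):
    """Kruskal's algorithm. `edges` is a list of (distance, i, j) tuples over
    points 0..n-1; returns the selected pairs and their total length."""
    parent = list(range(n))

    def find(x):                      # root of x's component, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen, total = [], 0.0
    for d, i, j in sorted(edges):     # consider pairs from shortest to longest
        ri, rj = find(i), find(j)
        if ri != rj:                  # joining two different components never creates a cycle
            parent[ri] = rj
            chosen.append((i, j))
            total += d
            if len(chosen) == n - 1:
                break
    return chosen, total

# Hypothetical instance: pairwise distances among four points.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(minimum_spanning_tree(4, edges))   # ([(0, 1), (1, 3), (1, 2)], 6.0)
```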
Steiner tree problem in graphs
Assume that a number of terminals (or clients) have to be connected by optical fibers. The terminals can be connected either directly or through a set of hubs previously located at fixed positions. The distances between each pair of points (be they terminals or hubs) are known beforehand. In the Steiner tree problem in graphs , we look for a network connecting terminals and hubs such that there is exactly one path between any two terminals and the total distance spanned by the optical fibers is minimum.
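A brute-force sketch follows: assuming, as in the description above, that any two points can be directly connected, the optimal network uses some subset of the hubs, and for a fixed subset the cheapest network is a minimum spanning tree over the terminals plus the chosen hubs (computed here with Prim's algorithm). Trying every subset is only viable for tiny instances; the point names, coordinates, and Euclidean distances are illustrative assumptions.

```python
from itertools import combinations

def mst_length(points, dist):
    """Prim's algorithm: total length of a minimum spanning tree over `points`,
    where dist(p, q) gives the distance between two points."""
    points = list(points)
    in_tree = {points[0]}
    total = 0.0
    while len(in_tree) < len(points):
        d, p = min((dist(a, b), b) for a in in_tree for b in points if b not in in_tree)
        in_tree.add(p)
        total += d
    return total

def steiner_tree_length(terminals, hubs, dist):
    """Exhaustive Steiner tree: for every subset of hubs, connect the terminals
    plus the chosen hubs by a minimum spanning tree and keep the cheapest."""
    best = float('inf')
    for k in range(len(hubs) + 1):
        for subset in combinations(hubs, k):
            best = min(best, mst_length(list(terminals) + list(subset), dist))
    return best

# Hypothetical instance: three terminals at the corners of a triangle plus one
# central hub, with Euclidean distances.
coords = {'t1': (0, 0), 't2': (4, 0), 't3': (2, 3), 'hub': (2, 1)}
dist = lambda a, b: ((coords[a][0] - coords[b][0]) ** 2 + (coords[a][1] - coords[b][1]) ** 2) ** 0.5
print(round(steiner_tree_length(['t1', 't2', 't3'], ['hub'], dist), 2))   # 6.47 (vs. 7.21 without the hub)
```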
Maximum clique problem
Consider the global friendship network where pairs of people are considered to be either friends or not. In the maximum clique problem , we seek to determine the largest set of people such that every two people in the set are mutual friends.
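For a very small network, the largest clique can be found by brute force, checking candidate subsets from the largest size downward; the names and data structures below are illustrative, and the exponential number of subsets already hints at why this problem is hard.

```python
from itertools import combinations

def maximum_clique(people, friends):
    """Brute-force maximum clique: `friends` is a set of frozensets {a, b}.
    Checks every subset, so it is only practical for very small networks."""
    for size in range(len(people), 0, -1):
        for group in combinations(people, size):
            if all(frozenset(pair) in friends for pair in combinations(group, 2)):
                return list(group)     # first clique found at the largest feasible size
    return []

# Hypothetical friendship network.
people = ['ana', 'bob', 'cal', 'dee']
friends = {frozenset(p) for p in [('ana', 'bob'), ('ana', 'cal'), ('bob', 'cal'), ('cal', 'dee')]}
print(maximum_clique(people, friends))   # ['ana', 'bob', 'cal']
```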
Knapsack problem
Consider a hiker who needs to pack a knapsack with a number of items to take along on a hike. The knapsack has a maximum weight capacity. Each item has a given weight and some utility to the hiker. If all of the items fit in the knapsack, the hiker packs them and goes off. However, the entire set of items may not fit in the knapsack and the hiker will need to determine which items to take. The knapsack problem consists in finding a subset of items with maximum total utility, among all sets of items that fit in the knapsack.
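When the item weights are integers, the knapsack problem can be solved exactly by dynamic programming over the capacities, as in the sketch below; the weights, utilities, and capacity are hypothetical.

```python
def knapsack(items, capacity):
    """0-1 knapsack by dynamic programming over integer capacities.
    `items` is a list of (weight, utility) pairs; returns the best total utility."""
    best = [0] * (capacity + 1)                      # best[c] = max utility using capacity c
    for weight, utility in items:
        for c in range(capacity, weight - 1, -1):    # descending so each item is packed at most once
            best[c] = max(best[c], best[c - weight] + utility)
    return best[capacity]

# Hypothetical hike: (weight, utility) of each candidate item, knapsack capacity 10.
items = [(5, 10), (4, 40), (6, 30), (3, 50)]
print(knapsack(items, 10))   # 90 (pack the items of weight 4 and 3)
```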
Traveling salesman problem
Consider a traveling salesman who needs to visit all cities in a given sales territory. The salesman must begin and end the tour in a given city and visit each other city in the territory exactly once. Since each city must be visited only once, a tour can be represented by a circular permutation of the cities. Assuming the distances between each pair of cities are known beforehand, the objective of the traveling salesman problem is to determine a permutation of the cities that minimizes the total distance traveled.
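Because a tour is a circular permutation, a tiny instance can be solved by fixing the starting city and enumerating all orderings of the remaining cities, as sketched below; the distance matrix is a hypothetical example, and the (n-1)! orderings make this approach usable only for very small n.

```python
from itertools import permutations

def traveling_salesman(dist):
    """Exhaustive traveling salesman: `dist[i][j]` is the distance between cities i and j.
    Fixes city 0 as the start/end of the tour and tries every ordering of the others."""
    n = len(dist)
    best_tour, best_length = None, float('inf')
    for order in permutations(range(1, n)):          # all (n-1)! circular permutations
        tour = (0,) + order + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length

# Hypothetical symmetric distance matrix for four cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(traveling_salesman(dist))   # ((0, 1, 3, 2, 0), 18)
```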
1.3 Exact vs. approximate methods
An exact method (or optimization method ) for solving an optimization problem is one that is guaranteed to produce, in finite time, a global optimum for this problem together with a proof of its optimality, if one exists, or otherwise to show that no feasible solution exists. Globally optimal solutions are often referred to as exact optimal solutions . Among the many exact methods for solving combinatorial optimization problems, we find algorithmic paradigms such as cutting planes , dynamic programming , backtracking , branch-and-bound (together with its variants and extensions, such as branch-and-cut and branch-and-price ), and implicit enumeration . Some of these paradigms can be viewed as tree search procedures, in the sense that they start from a feasible solution (which corresponds to the root of the tree) and carry out the search for the optimal solution by generating and scanning the nodes of a subtree of the solution space (whose nodes correspond to problem solutions).
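As an illustration of this tree-search view, the sketch below applies a simple branch-and-bound to the knapsack problem introduced earlier: each node of the implicit tree fixes, for a prefix of the items, whether the item is packed, and a deliberately crude optimistic bound (the sum of the utilities of the remaining items) is used to prune subtrees that cannot improve on the best packing found so far. The bounding rule and the instance data are assumptions made for the sake of a short example.

```python
def knapsack_branch_and_bound(items, capacity):
    """Exact 0-1 knapsack by branch-and-bound over an implicit binary tree of
    pack / leave-out decisions, pruned with an optimistic utility bound."""
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)  # best utility density first
    best = 0

    def explore(index, room, utility):
        nonlocal best
        best = max(best, utility)                  # the partial packing is itself feasible
        if index == len(items):
            return
        remaining = sum(u for _, u in items[index:])
        if utility + remaining <= best:            # optimistic bound: pack everything left
            return                                 # prune: this subtree cannot beat `best`
        weight, value = items[index]
        if weight <= room:
            explore(index + 1, room - weight, utility + value)   # branch: pack item `index`
        explore(index + 1, room, utility)                        # branch: leave item `index` out

    explore(0, capacity, 0)
    return best

print(knapsack_branch_and_bound([(5, 10), (4, 40), (6, 30), (3, 50)], 10))   # 90
```

A tighter bound, such as the value of the fractional-knapsack relaxation at each node, would prune far more aggressively; the crude bound above is kept only to keep the sketch short.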