Validated Numerics
A Short Introduction to Rigorous Computations
Warwick Tucker
PRINCETON UNIVERSITY PRESS
PRINCETON AND OXFORD
Copyright © 2011 by Princeton University Press
Published by Princeton University Press, 41 William Street,
Princeton, New Jersey 08540
In the United Kingdom: Princeton University Press,
6 Oxford Street, Woodstock, Oxfordshire OX20 1TW
All Rights Reserved
Library of Congress Cataloging-in-Publication Data
Tucker, Warwick, 1970-
Validated numerics: a short introduction to rigorous computations / Warwick Tucker.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-691-14781-9 (hardcover: alk. paper)
1. Numerical calculations--Verification. 2. Science--Data processing. I. Title.
QA76.95.T83 2011
502.85--dc22 2010040560
British Library Cataloging-in-Publication Data is available
This book has been composed in Times
Printed on acid-free paper.
press.princeton.edu
Typeset by S R Nova Pvt Ltd, Bangalore, India
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
It seems a pity, nevertheless, that mathematical rigour
should have to be abandoned precisely at the point when a
problem is reduced to arithmetic.
RAMON E. MOORE, Introduction to Algebraic Problems
(Topics in Interval Analysis, Oxford University Press, 1969)
Preface
Since the creation of the digital computer, numerical computations have played an increasingly fundamental role in modeling physical phenomena for science and engineering. With regard to computing speed and memory capacity, the early computers seem almost amusingly crude compared to their modern counterparts. Not surprisingly, today's students find it hard to believe that serious scientific work could be produced using such crude devices. True enough, the computations were rudimentary, and a significant portion of the total work took place before and after the computer did its share. Nevertheless, real-world problems were solved, and the speed-up due to the use of machines pushed the frontier of feasible computing tasks forward. Through a myriad of small developmental increments, we have now reached the production of petaflop/petabyte computers, an incredible feat which must have seemed completely unimaginable fifty years ago.
Despite this tremendous development in performance, very little has changed in the way computers actually perform their calculations. This state of affairs has led us to the rather awkward position where we can perform formidable computing tasks at very high speed, but where we do not have the capability to judge the validity of the final results. The question "Are we just getting the wrong answers faster?" is therefore a valid one, albeit slightly unkind to the great community of scientific computing, which has provided us with excellent numerical methods throughout the years.
Due to the inherent limitations of any finite-state machine, numerical computations are almost never carried out in a mathematically precise manner. As a consequence, they do not produce exact results, but rather approximate values that usually, but far from always, are near the true ones. In addition to this, external influences, such as an over-simplified mathematical model or a discrete approximation of the same, introduce additional inaccuracies into the calculations. As a result, even a seemingly simple numerical algorithm is virtually impossible to analyze with regard to its accuracy. To do so would involve taking into account every single floating point operation performed throughout the entire computation. It is somewhat amazing that a program performing only two floating point operations can be challenging to analyze! At speeds of one billion operations per second, any medium-sized program is clearly out of reach. This is a particularly valid point for complex systems, which require enormous models and very long computer runs. The grand example in this setting is weather prediction, although much simpler systems display the same kind of inaccessibility.
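As a small illustration of how just two operations can already misbehave, consider the following C++ fragment, assuming IEEE 754 double precision (the specific expression is chosen here purely for illustration):

#include <cstdio>

int main() {
    // Mathematically, (10^16 + 1) - 10^16 = 1. In double precision, however,
    // the spacing between representable numbers near 10^16 is 2, so the first
    // addition rounds 1e16 + 1 back down to 1e16, and the subtraction yields 0.
    double sum  = 1e16 + 1.0;   // first operation: rounds to 1e16
    double diff = sum - 1e16;   // second operation: exactly 0.0
    printf("%.1f\n", diff);     // prints 0.0, not 1.0
    return 0;
}

Analyzing even this two-operation program correctly requires knowing the floating point format, the rounding rule, and the spacing of representable numbers at the operands' magnitude.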
Fortunately, there are computational models in which approximate results are automatically provided with guaranteed error bounds. The simplest such model, interval analysis, was developed by Ramon Moore in the 1960s; see [Mo66]. At the time, however, computers were still at an early stage of development, and the additional costs associated with keeping track of the computational errors were deemed too high. Furthermore, without special care in formulating the numerical algorithms, the produced error bounds would inevitably become overly pessimistic, and therefore quite useless.
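To make the model concrete, here is a minimal C++ sketch of interval arithmetic. The names (Interval, add, mul) are illustrative rather than taken from any particular library, and outward rounding is approximated by widening each endpoint one unit in the last place with std::nextafter; a production implementation would instead switch the processor's rounding mode or use correctly rounded operations.

#include <cmath>
#include <cstdio>

struct Interval { double lo, hi; };

// Enclose a + b: round the lower endpoint down and the upper endpoint up.
Interval add(Interval a, Interval b) {
    return { std::nextafter(a.lo + b.lo, -INFINITY),
             std::nextafter(a.hi + b.hi,  INFINITY) };
}

// Enclose a * b: the endpoints are the min and max of the four corner products.
Interval mul(Interval a, Interval b) {
    double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    double lo = p[0], hi = p[0];
    for (int i = 1; i < 4; ++i) {
        lo = std::fmin(lo, p[i]);
        hi = std::fmax(hi, p[i]);
    }
    return { std::nextafter(lo, -INFINITY), std::nextafter(hi, INFINITY) };
}

int main() {
    Interval x{1.0 / 3.0 - 1e-16, 1.0 / 3.0 + 1e-16};  // an enclosure of 1/3
    Interval y = mul(add(x, x), x);                    // encloses (1/3 + 1/3) * 1/3 = 2/9
    printf("[%.17g, %.17g]\n", y.lo, y.hi);
    return 0;
}

Every operation returns an interval guaranteed to contain the exact result, so the guaranteed error bound is carried along with the computation itself.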
Today, the development of interval methods has reached a high level of sophistication: in many cases, tight error bounds can be produced even faster than a nonrigorous computation can provide an approximation. As a testament to this, several highly non-trivial results in pure mathematics have recently been proved using computer-aided methods based on such interval techniques; see e.g. [Ha95], [Tu02], and [GM03]. We have now reached the stage where we can demand rigor as well as speed from our numerical computations. In light of this, it is clear that the future development of scientific computation must include techniques for performing validated numerics. It is also clear that mathematicians have been given a new, powerful research tool for challenging, non-linear problems.
This, in a nutshell, is the topic of the book before you.
Introduction
WHAT IS VALIDATED NUMERICS?
Validated numerics is the field that aims to bridge the gap between scientific computing and pure mathematics, between speed and reliability.
One strain of the field aims at pushing the frontiers of computer-aided proofs in mathematical analysis. This area of research deals with problems that cannot be solved by traditional mathematical methods. Typically, such problems have a global component (e.g. a large search space) as well as a non-linear ingredient (the output is not proportional to the input). These problems have traditionally been studied through numerical simulations, and therefore our knowledge of them lacks the rigor demanded by a formal proof.
Validated numerics aims to bridge the gap between a numerically observed phenomenon and its mathematical counterpart. This is achieved by developing means of computing numerically, yet with mathematical rigor. Validated numerics merges pure mathematics with scientific computing in a novel way: instead of computing approximations to sought quantities, the aim is to compute enclosures of the same. The width of such an enclosure gives a direct quality measurement of the computation and can be used to adaptively improve the calculations at run-time. This fundamental change of focus results in efficient and extremely fault-tolerant numerical methods, ideal for the systematic study of complex systems. Validated numerics can thus play an instrumental role: it is the only way to certify that a numerical computation meets required error tolerances. As computer simulations gradually replace physical experiments, this type of certification is of utmost importance.
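As a toy illustration of how an enclosure's width acts as a built-in quality measure and drives adaptive refinement, the following C++ sketch (the tolerance and the bisection scheme are chosen here purely for illustration) encloses sqrt(2) by repeatedly bisecting a bracketing interval until its width meets a requested tolerance:

#include <cstdio>

int main() {
    double lo = 1.0, hi = 2.0;        // sqrt(2) lies in [1, 2]
    const double tol = 1e-12;         // required error tolerance
    // The width hi - lo bounds the error of any point in the enclosure,
    // so we can refine adaptively until the requested tolerance is met.
    // (A fully rigorous version would evaluate the sign of mid*mid - 2
    // with outward rounding; this sketch uses plain double arithmetic.)
    while (hi - lo > tol) {
        double mid = lo + (hi - lo) / 2.0;
        if (mid * mid < 2.0) lo = mid;   // root lies in [mid, hi]
        else                 hi = mid;   // root lies in [lo, mid]
    }
    printf("sqrt(2) in [%.15f, %.15f], width %.3g\n", lo, hi, hi - lo);
    return 0;
}

At every stage the program knows exactly how good its answer is, which is precisely the information a plain approximation lacks.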
Many challenging problems share the same type of inaccessibility due to non-linearities affecting the global behavior of the systems. Neither tools from pure mathematics nor scientific computing alone have been successful in establishing quantitative information about such complex systems. The field of validated numerics aims not only to develop the mathematical foundation needed to overcome these obstacles, but also to produce concrete numerical methods able to provide mathematical statements about such systems.