Mark A. Haidekker, Medical Imaging Technology, SpringerBriefs in Physics, 2013. DOI: 10.1007/978-1-4614-7073-1_1. © The Author(s) 2013
1. Introduction
Medical imaging refers to several different technologies that are used to view the human body in order to diagnose, monitor, or treat medical conditions. Each type of technology gives different information about the area of the body being studied or treated, related to possible disease, injury, or the effectiveness of medical treatment.
This concise definition by the US Food and Drug Administration illuminates the goal of medical imaging: to make a specific condition or disease visible. In this context, visible implies that the area of interest is distinguishable in some fashion (for example, by a different shade or color) from the surrounding tissue and, ideally, from healthy, normal tissue. The difference in shade or color can be generalized with the term contrast.
The process of gathering data to create a visible model (i.e., the image) is common to all medical imaging technologies and can be explained with the simple example of a visible-light camera. The sample is probed with incident light, and reflected light carries the desired information. For example, a melanoma of the skin would reflect less light than the surrounding healthy skin. The camera lens collects some of the reflected light and, most importantly, focuses the light onto the film or image sensor in such a way that a spatial relationship exists between the origin of the light ray and its location on the image sensor. The ability to spatially resolve a signal (in this example, light intensity) is fundamental to every imaging method. This spatial encoding can be fairly straightforward (for example, following an X-ray beam along a straight path) or fairly complex (for example, in magnetic resonance imaging, where a radiofrequency signal is encoded spatially by its frequency and its phase).
In the next step of the process, the spatially resolved data are accumulated. Once again, the camera analogy is helpful. At the start of the exposure, the sensor array is reset. Over the duration of the exposure, incoming light creates a number of electrical charges that depends on the light intensity. At the end of the exposure, the charges are transferred from the sensor to a storage medium. From here, the image would typically be displayed in such a fashion that higher charge read-outs correspond to higher screen intensity. In the camera example, the relationship between reflected light intensity and displayed intensity is straightforward. In other cases, intensity relates to different physical properties. Examples include X-ray absorption (which gives X-ray images the characteristic negative appearance with bones appearing bright and air dark), concentration of a radioactively labeled compound, or the time it takes for a proton to regain its equilibrium orientation in a magnetic field.
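The mapping from accumulated charge to displayed intensity described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not any real scanner's pipeline: the function name and the linear rescaling to an 8-bit range are assumptions. The invert flag mimics the negative appearance of X-ray images, where high absorption (a low transmitted signal) appears bright.

```python
def to_display_intensity(readouts, invert=False):
    """Linearly rescale raw sensor read-outs to display values in [0, 255].

    Hypothetical helper for illustration: the smallest read-out maps to 0
    (black) and the largest to 255 (white). With invert=True, the scale is
    flipped, mimicking the negative appearance of X-ray film.
    """
    lo, hi = min(readouts), max(readouts)
    span = (hi - lo) or 1  # avoid division by zero for a flat image
    scaled = [round(255 * (r - lo) / span) for r in readouts]
    if invert:
        scaled = [255 - s for s in scaled]
    return scaled

# A toy row of read-outs, from low charge (dark) to high charge (bright)
row = [10, 40, 200, 90]
print(to_display_intensity(row))               # direct camera-style mapping
print(to_display_intensity(row, invert=True))  # X-ray-style negative
```

In practice the mapping is rarely this simple; display calibration and windowing adjust which part of the measured range is shown, but the principle of turning accumulated signal into screen intensity is the same.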
The physical meaning of image intensity is key to interpreting the image, and the underlying physical process is fundamental to achieving the desired contrast. As a consequence, the information encoded in the image varies fundamentally between imaging modalities and, in some cases (such as MRI), even within the same modality.
The image is evaluated by an experienced professional, usually a radiologist. Even in today's age of automated image analysis and computerized image understanding, the radiologist combines the information encoded in the image with knowledge of the patient's symptoms and history and with knowledge of anatomy and pathology to finally form a diagnosis. Traditional viewing of film over a light box is still prominent, even with purely digital imaging modalities, although more and more radiologists make use of the on-the-fly capabilities of the digital imaging workstation to view and enhance images. Furthermore, computerized image processing can help enhance the image, for example, by reducing noise, emphasizing edges, improving contrast, or taking measurements.
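One of the enhancement steps mentioned above, noise reduction, can be illustrated with a minimal sketch. The one-dimensional moving-average filter below is a hypothetical stand-in for the two-dimensional filters a real imaging workstation would apply; the function name and the sample data are invented for illustration.

```python
def smooth(signal, radius=1):
    """Simple 1-D moving-average filter (a sketch of noise reduction).

    Each output value is the mean of the input values within `radius`
    samples on either side; the window is clipped at the signal edges.
    """
    out = []
    n = len(signal)
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A row of pixel intensities with a noise spike at index 3
noisy = [100, 104, 98, 150, 101, 99, 103]
print(smooth(noisy))  # the spike is attenuated toward its neighbors
```

Averaging suppresses isolated noise at the cost of blurring edges, which is why practical systems use more selective filters; the trade-off itself, however, is inherent to all smoothing-based noise reduction.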
1.1 A Brief Historical Overview
X-rays were discovered in 1895. Within less than a decade, an astonishingly short time, X-ray imaging became a mainstream diagnostic procedure and was adopted by most major hospitals in Europe and the USA. At that time, sensitivity was low, and exposure times for a single image were very long. The biological effects of X-rays were poorly explored, and radiation burns were common in the early years of diagnostic (and recreational) X-ray use. As the pernicious effects of ionizing radiation became better understood, efforts were made to shield operators from radiation and to reduce patient exposure. However, for half a century, X-ray imaging did not change in any fundamental fashion, and it remained the only way to provide images from inside the body.
The development of sonar (sound navigation and ranging) eventually led to the next major discovery in biomedical imaging: ultrasound imaging. After World War II, efforts were made, in part with surplus military equipment, to use sound wave transmission and sound echoes to probe organs inside the human body. Ultrasound imaging is unique in that image formation can take place with purely analog circuits. As such, ultrasound imaging was feasible with state-of-the-art electronics in the 1940s and 1950s (meaning: analog signal processing with vacuum tubes). Progress in medical imaging modalities accelerated dramatically with the advent of digital electronics and, most notably, digital computers for data processing. In fact, with the exception of film-based radiography, all modern modalities rely on computers for image formation. Even ultrasound imaging now involves digital filtering and computer-based image enhancement.
In 1972, Godfrey Hounsfield introduced a revolutionary new device that was capable of providing cross-sectional, rather than planar, images with X-rays. He called the method tomography, from the Greek words for "to cut" and "to write" [7]. The imaging modality is known as computed tomography (CT) or computer-aided tomography (CAT), and it was the first imaging modality to require digital computers for image formation. CT technology aided the development of emission tomography, and the first CT scanner was soon followed by the first positron emission tomography scanner.
The next milestone, magnetic resonance imaging (MRI), was introduced in the late 1970s. MRI, too, relies on digital data processing, in part because it uses the Fourier transform to provide the cross-sectional image. Since then, progress has been more incremental, with substantial advances in image quality and acquisition speed. The resolution and tissue discrimination that today's CT and MRI devices achieve, for example, were literally unthinkable at the time these devices were introduced. In parallel, digital image processing and the digital imaging workstation provided the radiologist with new tools to examine images and provide a diagnosis. Three-dimensional image display, multi-modality image matching, and preoperative surgery planning were all made possible by computerized image processing and display.