Introducing Istio Service Mesh for Microservices
by Burr Sutter and Christian Posta
Copyright 2019 O'Reilly Media. All rights reserved.
Printed in the United States of America.
Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.
- Editors: Chris Guzikowski and Eleanor Bru
- Production Editor: Deborah Baker
- Copyeditor: Kim Cofer
- Proofreader: Matthew Burgoyne
- Interior Designer: David Futato
- Cover Designer: Karen Montgomery
- Illustrator: Rebecca Demarest
- March 2019: Second Edition
Revision History for the Second Edition
- 2019-03-19: First Release
The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Introducing Istio Service Mesh for Microservices, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.
While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
This work is part of a collaboration between O'Reilly and Red Hat. See our statement of editorial independence.
978-1-492-05260-9
[LSI]
Chapter 1. Introduction
If you are looking for an introduction to the world of Istio, the service mesh platform, with detailed examples, this is the book for you. This book is for the hands-on application architect and development team lead focused on cloud-native applications based on the microservices architectural style. This book assumes that you have hands-on experience with Docker, and while Istio will be available on multiple Linux container orchestration solutions, the focus of this book is specifically Istio on Kubernetes/OpenShift. Throughout this book, we will use the terms Kubernetes and OpenShift interchangeably. (OpenShift is Red Hat's supported distribution of Kubernetes.)
If you need an introduction to Java microservices covering Spring Boot and Thorntail (formerly known as WildFly Swarm), check out Microservices for Java Developers (O'Reilly), by Christian Posta.
Also, if you are interested in reactive microservices, an excellent place to start is Building Reactive Microservices in Java (O'Reilly), by Clement Escoffier, as it is focused on Vert.x, a reactive toolkit for the Java Virtual Machine.
In addition, this book assumes that you have a comfort level with Kubernetes/OpenShift; if that is not the case, OpenShift for Developers (O'Reilly), by Grant Shipley and Graham Dumpleton, is an excellent ebook on that very topic. We will be deploying, interacting with, and configuring Istio through the lens of OpenShift; however, the commands we'll use are mostly portable to vanilla Kubernetes as well.
To begin, we discuss the challenges that Istio can help developers solve and then describe Istio's primary components.
The Challenge of Going Faster
The software development community, in the era of digital transformation, has embarked on a relentless pursuit of better serving customers and users. Today's digital creators, the application programmers, have not only evolved into faster development cycles based on Agile principles, but are also in pursuit of vastly faster deployment times. Although the monolithic code base and resulting application might be deployable at the rapid clip of once a month or even once a week, it is possible to achieve even greater velocity to production by breaking up the application into smaller units with smaller team sizes, each with its own independent workflow, governance model, and deployment pipeline. The industry has defined this approach as microservices architecture.
Much has been written about the various challenges associated with microservices, as it introduces many teams, for the first time, to the fallacies of distributed computing. The number one fallacy is that the network is reliable. Microservices communicate significantly over the network: the connection between your microservices. This is a fundamental change to how most enterprise software has been crafted over the past few decades. When you add a network dependency to your application logic, you have invited in a whole host of potential hazards that grow proportionally, if not exponentially, with the number of connections your application depends on.
Understandably, new challenges arise in moving from a single deployment every few months to (potentially) dozens of software deployments every week or even every day.
Some of the big web companies had to develop special frameworks and libraries to help alleviate some of the challenges of an unreliable network, ephemeral cloud hosts, and many code deployments per day. For example, companies like Netflix created projects like Ribbon, Hystrix, and Eureka to solve these types of problems. Others, such as Twitter and Google, ended up doing similar things. The frameworks they created were very language- and platform-specific and, in some cases, made it difficult to bring in new application services written in programming languages that didn't have support from these resilience frameworks. Whenever these frameworks were updated, the applications also needed to be updated to stay in lockstep. Finally, even if they created an implementation of these frameworks for every possible permutation of language runtime, they'd have massive overhead in trying to apply the functionality consistently. At least in the Netflix example, these libraries were created in a time when the virtual machine (VM) was the main deployable unit, and they were able to standardize on a single cloud platform plus a single application runtime, the Java Virtual Machine. Most companies cannot and will not do this.
The advent of the Linux container (e.g., Docker) and of Kubernetes/OpenShift has been a fundamental enabler for DevOps teams to achieve vastly higher velocities by focusing on the immutable image that flows quickly through each stage of a well-automated pipeline. How development teams manage their pipeline is now independent of the language or framework that runs inside the container. OpenShift has enabled us to provide better elasticity and overall management of a complex set of distributed, polyglot workloads. OpenShift ensures that developers can easily deploy and manage hundreds, if not thousands, of individual services. Those services are packaged as containers running in Kubernetes pods, complete with their respective language runtime (e.g., Java Virtual Machine, CPython, or V8) and all their necessary dependencies, typically in the form of language-specific frameworks (e.g., Spring or Express) and libraries (e.g., jars or npms). However, OpenShift does not get involved with how the application components, running in their individual pods, interact with one another. This is the crossroads at which we, as architects and developers, find ourselves. The tooling and infrastructure to quickly deploy and manage polyglot services is becoming mature, but we're missing similar capabilities when we talk about how those services interact. This is where the capabilities of a service mesh such as Istio allow you, the application developer, to build better software and deliver it faster than ever before.
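To make that packaging model concrete, here is a minimal sketch of a Kubernetes Deployment for one such containerized service. The service name, image, and port are hypothetical placeholders, and the sidecar.istio.io/inject annotation is one way to ask Istio (assuming it is installed in the cluster) to inject its proxy container alongside the application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation                  # hypothetical service name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: recommendation
  template:
    metadata:
      labels:
        app: recommendation
      annotations:
        sidecar.istio.io/inject: "true"  # request Istio sidecar injection for this pod
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:v1 # placeholder image; runtime and dependencies baked in
        ports:
        - containerPort: 8080            # assumed application port
```

You could apply this with oc apply -f (or kubectl apply -f) like any other Kubernetes resource. The point is that the container image, not the platform, carries the runtime and libraries, while the mesh takes over the service-to-service concerns that the platform leaves open.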