Kubernetes
The ultimate beginner's guide to effectively learn Kubernetes step-by-step
Mark Reed
Copyright 2019 - All rights reserved.
It is not legal to reproduce, duplicate, or transmit any part of this document by any electronic or printed means. Recording of this publication is strictly prohibited, and storage of this document is not allowed without written permission from the publisher, except for the use of brief quotations in a book review.
Introduction
The history of computer science can be characterized by the development of abstractions that aim at reducing complexity and empowering people to create more sophisticated applications. However, the development of scalable and reliable applications is still more challenging than it should be. To reduce this complexity, containers and container orchestration APIs such as Kubernetes have been introduced as crucial abstractions that radically simplify the development of scalable and reliable systems. Though orchestrators and containers are still being absorbed into the mainstream, they already enable developers to create and deploy applications with agility, reliability, and, above all, speed.
Kubernetes has become the de-facto platform for deploying and managing cloud native applications. Unsurprisingly, Kubernetes adoption has expanded into more complex and mission-critical applications. As such, enterprise operations teams ought to be conversant with Kubernetes to effectively manage any challenges that may arise. Note that developer experience, operator experience, and multi-tenancy are the core challenges that Kubernetes users encounter. Though the complexity of using and operating Kubernetes may be a real concern, enterprises that manage to overcome the challenges enjoy benefits such as increased release frequencies, faster recovery from failures, quicker adoption of cloud technologies, and an improved customer experience that, in turn, offers a myriad of business advantages. The good news is that developers gain the freedom to innovate faster, while operations teams ensure that resources are utilized efficiently and compliance is upheld.
This book focuses on explaining the design and management of Kubernetes clusters. As such, it covers in detail the capabilities and services that Kubernetes provides for both developers and daily users. The following chapters will take the reader through the deployment and application of Kubernetes, while taking into consideration different use cases and environments. The reader will gain extensive knowledge about how Kubernetes is organized, when it is best to apply various resources, and how clusters can be implemented and configured effectively. You will also gain an in-depth understanding of the Kubernetes architecture, how clusters are installed and operated, and how software can be deployed by applying the best practices possible.
If you are new to Kubernetes, this book offers in-depth information that helps in understanding Kubernetes, its benefits, and why you need it. The following chapters give a detailed introduction to Kubernetes, containers, and the development of containerized applications. They further describe the Kubernetes cluster orchestrator and how its tools and APIs are used to improve the delivery, development, and maintenance of distributed applications. You will learn how to move container applications into production by applying best practices, and how Kubernetes fits into your daily operations, ensuring that you are prepared for production-ready container application stacks. This book aims to help the reader comprehend the Kubernetes technology and to show how to use the Kubernetes tooling efficiently and effectively to develop and deploy apps to Kubernetes clusters.
Chapter One:
A Kubernetes Overview
What is Kubernetes?
Kubernetes takes its name from the Greek word for "helmsman" or "pilot." It is an open-source container orchestration system that automates deployment, scaling, and management. The first version of Kubernetes was released in July 2015, when Google donated the project to the newly formed Cloud Native Computing Foundation (CNCF). It makes it easier for a developer to package an application together with the various elements it needs into a single unit. For developers wanting to create more complex applications that span multiple containers and machines, Kubernetes is an ideal solution. It can restart application components and move them across systems as required. It serves as a basic framework that lets users choose the frameworks, instruments, languages, and other tools they prefer. Even though Kubernetes is not itself a platform-as-a-service tool, it forms a good basis for building such platforms. Kubernetes is designed to offer solutions for modern application infrastructure.
Its main unit of organization is called a pod . A pod is a group of containers that are scheduled together on the same machine and can communicate with one another easily. Each pod has a unique IP address, ensuring that different pods can use the same ports without the risk of conflict. Containers within a pod share that address and can reach one another as localhost. Pods carry labels , key-value metadata, and are organized into services : a service selects the pods that share a given set of labels and presents them as one unit. These parts create a systematic, consistent way to issue instructions through a command-line client. A pod can also define volumes, such as network disks, and make them available to the containers in the pod. Pods are typically managed through a controller, which ensures they keep working properly.
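As a sketch, the ideas above can be expressed in a minimal pod manifest. All names and images here are illustrative, not taken from this book: two containers share one pod, one IP address, and one volume, each mounting it at a different path.

```yaml
# Illustrative two-container pod: shared network identity and a shared volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod           # hypothetical name
  labels:
    app: web              # label used later by services/controllers to select this pod
spec:
  volumes:
    - name: shared-data   # ephemeral volume visible to both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data   # same volume, mounted at a different location
```

Because both containers run in the same pod, the writer could also reach the web server at `localhost:80`; applying the manifest with `kubectl apply -f pod.yaml` creates the pod.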
Kubernetes does not limit the types of applications it supports; its main objective is to support a wide variety of workloads. It's important to understand that Kubernetes does not provide source code, nor does it build your workload; instead, it manages the deployment and operation of the application you supply. Kubernetes does not bundle logging, monitoring, or alerting solutions; it provides integration points for mechanisms that collect and export metrics. Nor does it impose a comprehensive configuration or management system, leaving those choices to the developer. Strictly speaking, Kubernetes eliminates the need for orchestration in the traditional sense of executing a defined workflow from the first step to the last. Instead, it is composed of independent control processes that continuously drive the current state toward the desired state, with no centralized orchestration required. This makes Kubernetes an easy, efficient, powerful, and resilient system. The ecosystem provided by Kubernetes also offers micro-level implementations that address microservice concerns. This efficiency has made Kubernetes widely adopted by developers who want their applications and software tools to run efficiently and dependably.
Kubernetes also has replica sets , which maintain the desired number of copies of a pod. A selector , in this case, identifies the pods that belong to each replica set, determining which pods to add, remove, or keep. For multi-tier applications, Kubernetes offers service discovery by assigning each service a stable IP address and DNS name. The service load-balances traffic across the pods that match its selector, so clients do not need to track individual pod addresses. By default, a service is reachable only inside the cluster, but it can also be exposed outside it. Storage in Kubernetes is handled through volumes, because a container's own filesystem is ephemeral: a container restart clears its data. A volume, by contrast, lives for the lifetime of the pod and can be shared among the containers in that pod. The pod configuration determines where each volume is mounted, and different containers can mount the same volume at different locations.
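The replica-set and service mechanics described above can be sketched in a manifest like the following; the names are hypothetical, and the pod template assumes an `app: web` label that ties the three pieces together.

```yaml
# Illustrative ReplicaSet: keeps three copies of a labeled pod running.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web            # the selector: pods carrying this label belong to the set
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# A Service giving those pods one stable IP address and DNS name.
apiVersion: v1
kind: Service
metadata:
  name: web-svc           # resolvable in-cluster as "web-svc"
spec:
  selector:
    app: web              # traffic is load-balanced across pods matching this label
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP         # in-cluster only; NodePort/LoadBalancer expose it externally
```

If a pod matching the selector dies, the ReplicaSet starts a replacement, and the Service automatically routes traffic to the new pod without clients noticing.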