
2017 | Book

Kubernetes Management Design Patterns

With Docker, CoreOS Linux, and Other Platforms


About This Book

Take container cluster management to the next level: learn how to administer and configure Kubernetes on CoreOS, and apply suitable management design patterns such as ConfigMaps, autoscaling, elastic resource usage, and high availability. Other features discussed include logging, scheduling, rolling updates, volumes, service types, and multiple cloud provider zones. The atomic unit of modular container service in Kubernetes is a pod, a group of containers with a common filesystem and networking. The Kubernetes pod abstraction enables design patterns for containerized applications similar to object-oriented design patterns. Containers provide some of the same benefits as software objects, such as modularity (packaging), abstraction, and reuse.
CoreOS Linux is used in the majority of the chapters and other platforms discussed are CentOS with OpenShift, Debian 8 (jessie) on AWS, and Debian 7 for Google Container Engine.
CoreOS is the main focus because Docker is pre-installed on CoreOS out of the box. CoreOS:
Supports most cloud providers (including Amazon AWS EC2 and Google Cloud Platform) and virtualization platforms (such as VMware and VirtualBox)
Provides cloud-config for declaratively configuring OS items such as networking (flannel), the distributed key-value store (etcd), and user accounts
Provides a production-level infrastructure for containerized applications including automation, security, and scalability
Leads the drive for container industry standards and founded appc
Provides the most advanced container registry, Quay
Docker was made available as open source in March 2013 and has become the most commonly used containerization platform. Kubernetes was open-sourced in June 2014 and has become the most widely used container cluster manager. The first stable version of CoreOS Linux was made available in July 2014 and has since become one of the most commonly used operating systems for containers.
What You'll Learn
Use Kubernetes with Docker
Create a Kubernetes cluster on CoreOS on AWS
Apply cluster management design patterns
Use multiple cloud provider zones
Work with Kubernetes and tools like Ansible
Discover the Kubernetes-based PaaS platform OpenShift
Create a high availability website
Build a high availability Kubernetes master cluster
Use volumes, configmaps, services, autoscaling, and rolling updates
Manage compute resources
Configure logging and scheduling


Who This Book Is For
Linux admins, CoreOS admins, application developers, and container-as-a-service (CaaS) developers. Some prerequisite knowledge of Linux and Docker is required, as is introductory knowledge of Kubernetes, such as creating a cluster, creating a pod, creating a service, and creating and scaling a replication controller. For introductory Docker and Kubernetes information, refer to Pro Docker (Apress) and Kubernetes Microservices with Docker (Apress). Some prerequisite knowledge of Amazon Web Services (AWS) EC2, CloudFormation, and VPC is also required.

Table of Contents

Frontmatter

Platforms

Frontmatter
Chapter 1. Kubernetes on AWS
Abstract
Kubernetes is a cluster manager for Docker (and rkt) containers. The Introduction outlines its basic architecture and relationship to CoreOS and Amazon Web Services (AWS). In this chapter we’ll spin up a basic cluster without configuration.
Deepak Vohra
Chapter 2. Kubernetes on CoreOS on AWS
Abstract
Kubernetes is usually used with a cloud platform, as the hardware infrastructure required for a multi-node Kubernetes cluster is best provisioned in a cloud environment.
Deepak Vohra
Chapter 3. Kubernetes on Google Cloud Platform
Abstract
Google Cloud Platform is a public cloud computing platform that includes database services and infrastructure on which applications and websites may be hosted on managed virtual machines. This integrated PaaS/IaaS is a collection of services that may be categorized into Compute, Storage and Databases, Networking, Big Data, and Machine Learning, to list a few.
Deepak Vohra

Administration and Configuration

Frontmatter
Chapter 4. Using Multiple Zones
Abstract
High availability in a Kubernetes cluster is implemented along several dimensions: high availability of master controllers requires provisioning multiple master controllers; high availability of etcd requires provisioning multiple etcd nodes; and high availability of public DNS requires provisioning multiple public DNSes. In a cloud-native application, the availability of a cluster also depends on the availability of the region or zone in which its nodes run. AWS provides various high-availability design patterns, such as Multi Region Architecture, Multiple Cloud Providers, DNS Load Balancing Tier, and Multiple Availability Zones. In this chapter we will discuss the Multiple Availability Zones design pattern as implemented by Kubernetes. Amazon AWS availability zones are distinct physical locations with independent power, networking, and security, insulated from failures in other availability zones. Availability zones within the same region have low-latency network connectivity between them.
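As a sketch of how a pod can be pinned to one availability zone, the manifest below uses the zone label that Kubernetes of this era attached to AWS cloud-provider nodes (the zone value us-east-1c is an assumption for illustration):

```yaml
# Pod scheduled only onto nodes in a specific AWS availability zone,
# via the zone label Kubernetes adds to cloud-provider nodes.
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod
spec:
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: us-east-1c
  containers:
  - name: hello-world
    image: nginx
```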
Deepak Vohra
Chapter 5. Using the Tectonic Console
Abstract
Tectonic is a commercial enterprise Kubernetes platform providing enterprise-level security, scalability, and reliability. Tectonic provides an integrated platform based on Kubernetes and CoreOS Linux. The Tectonic architecture consists of Kubernetes cluster manager orchestrating rkt containers running on CoreOS. Tectonic provides Distributed Trusted Computing using cryptographic verification of the entire environment, from the hardware to the cluster. Tectonic enhances open source Kubernetes, and applications may be deployed between cloud and data center environments.
Deepak Vohra
Chapter 6. Using Volumes
Abstract
Kubernetes pods are invariably associated with data, and the data can either be made integral to a Docker container via its Docker image or decoupled from the Docker container.
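A minimal sketch of the decoupled approach: an emptyDir volume created with the pod and mounted into its container, rather than data baked into the Docker image (names and mount path are illustrative):

```yaml
# Pod with a volume decoupled from the container image: the emptyDir
# volume is created when the pod starts and mounted at /data.
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
```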
Deepak Vohra
Chapter 7. Using Services
Abstract
A Kubernetes service is an abstraction serving a set of pods. The pods that a service defines or represents are selected using label selectors specified in the service spec. A service's label selector expression must be included in a pod's labels for the service to represent the pod. For example, if a service selector expression is "app=hello-world", a pod's labels must include the label "app=hello-world" for the service to route client traffic to the pod. A service is accessed at one or more endpoints provided by the service. The number of endpoints available is equal to the number of pod replicas for a deployment/replication controller. To be able to access a service outside its cluster, the service must be exposed at an external IP address. The ServiceType field defines how a service is exposed. By default a ServiceType is ClusterIP, which exposes the service only within the cluster and not at an external IP. The other ServiceTypes are NodePort and LoadBalancer, which expose the service at an external IP.
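The description above can be sketched as a service manifest of type NodePort, using the app=hello-world selector from the example (the port numbers are assumptions):

```yaml
# Service routing traffic to pods labeled app=hello-world; NodePort
# exposes it on each node's IP in addition to the cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 80          # port the service listens on
    targetPort: 8080  # port the pod's container listens on
    nodePort: 30080   # port exposed on each node
```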
Deepak Vohra
Chapter 8. Using Rolling Updates
Abstract
It is common for a replication controller specification or a container image to be updated. If a replication controller is created from an earlier image or definition file, the replication controller will need to be updated.
Deepak Vohra
Chapter 9. Scheduling Pods on Nodes
Abstract
Scheduling involves finding the pods that need to be run and running (scheduling) them on nodes in a cluster.
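As a minimal sketch of constraining where a pod is scheduled, a nodeSelector matches labels previously applied to nodes (the disktype=ssd label is a hypothetical example, applied with, e.g., kubectl label nodes <node-name> disktype=ssd):

```yaml
# Pod restricted to nodes carrying the label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
```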
Deepak Vohra
Chapter 10. Configuring Compute Resources
Abstract
Kubernetes’s resource model is simple, regular, extensible and precise. The Kubernetes container cluster manager provides two types of resources: compute resources and API resources. Supported compute resources (simply called “resources” in this chapter) are CPU and RAM (or memory). Support for other compute resources, such as network bandwidth, network operations, storage space, storage operations, and storage time may be added later.
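A sketch of specifying the two supported compute resources on a container, using requests (guaranteed minimum) and limits (enforced maximum); the particular quantities are illustrative:

```yaml
# Container with CPU and memory requests and limits.
# 250m = 0.25 CPU core; Mi = mebibytes.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
```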
Deepak Vohra
Chapter 11. Using ConfigMaps
Abstract
In Chapter 10 and some earlier chapters, we used the spec: containers: env: field to specify an environment variable for the Docker image mysql for the MySQL database.
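A sketch of the ConfigMap alternative to hard-coding the environment variable: the value is stored in a ConfigMap and referenced from the pod spec via configMapKeyRef (the ConfigMap name, key, and password value are assumptions):

```yaml
# ConfigMap holding a configuration value, decoupled from the pod spec.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  mysql-root-password: mysql
---
# Pod consuming the ConfigMap value as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: mysql-config
          key: mysql-root-password
```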
Deepak Vohra
Chapter 12. Using Resource Quotas
Abstract
In Chapter 10 we introduced a resource consumption model based on requests and limits, using which resources (CPU and memory) are allocated to a pod’s containers.
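A sketch of a namespace-level quota built on that requests/limits model; the hard caps shown are illustrative values:

```yaml
# ResourceQuota capping aggregate resource consumption in a namespace:
# at most 10 pods, with total requests and limits bounded as shown.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    pods: "10"
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
```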
Deepak Vohra
Chapter 13. Using Autoscaling
Abstract
Starting new pods may sometimes be required in a Kubernetes cluster, for example to meet the demands of an increased load. If a container in a pod fails, the replication controller can restart it, which in effect means starting a replacement container.
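A sketch of a horizontal pod autoscaler targeting a replication controller, using the autoscaling/v1 API available at the time (the controller name and thresholds are assumptions):

```yaml
# HorizontalPodAutoscaler scaling a replication controller between
# 1 and 10 replicas to hold average CPU utilization near 50%.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: hello-world
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```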
Deepak Vohra
Chapter 14. Configuring Logging
Abstract
Logging is the process of collecting and storing log messages generated by different components of a system (which would be a Kubernetes cluster) and by applications running on the cluster.
Deepak Vohra

High Availability

Frontmatter
Chapter 15. Using an HA Master with OpenShift
Abstract
A Platform as a Service (PaaS) is a cloud platform on which applications may be developed, run, and managed with almost no configuration as the platform provides the application infrastructure including networking, storage, OS, runtime middleware, databases, and other dependency services. Kubernetes is the most commonly used container cluster manager and can be used as the foundation for developing a PaaS. OpenShift is an example of a PaaS.
Deepak Vohra
Chapter 16. Developing a Highly Available Website
Abstract
In Chapter 4 we used multiple AWS availability zones to provide fault tolerance for failure of a zone. But a high-availability master was not used, and the single master is a single point of failure. In Chapter 15 we did use a high-availability master with OpenShift and Ansible, but the single elastic load balancer remains a single point of failure.
Deepak Vohra
Backmatter
Metadata
Title
Kubernetes Management Design Patterns
Author
Deepak Vohra
Copyright Year
2017
Publisher
Apress
Electronic ISBN
978-1-4842-2598-1
Print ISBN
978-1-4842-2597-4
DOI
https://doi.org/10.1007/978-1-4842-2598-1