Rahul Shishodia
Architecture

Overview of Kubernetes Cluster

  • Kubernetes Framework: A powerful but complex open-source system for deploying, managing, and orchestrating containers at scale.

  • Key Components:

    • Master nodes (control plane)

    • Worker nodes (perform actual work by running application pods)

Key Terminology

  • Pod: The smallest deployable unit in Kubernetes, consisting of one or more containers with shared storage and network.

  • Node: A physical or virtual machine in the Kubernetes cluster.
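As a sketch of the terminology above, a minimal Pod manifest might look like the following. All names and images here are illustrative, not part of any specific deployment; the two containers share the pod's network namespace and an `emptyDir` volume for shared storage.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}           # scratch volume shared by both containers
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```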


Worker Nodes

Key Processes on Worker Nodes

  1. Container Runtime

    • Responsible for running containers within the pods.

    • Example: Docker or other container technologies.

  2. Kubelet

    • The agent that ensures the containers are running in a pod.

    • Assigns resources (CPU, RAM, storage) to the containers.

    • Communicates with the container runtime to manage pods.

  3. Kube Proxy

    • Manages the network communication between services and pods.

    • Forwards service requests to an appropriate backing pod, taking the request's origin into account.

    • Reduces network overhead by forwarding requests within the same node when possible.
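The node-local forwarding behaviour described above can be made explicit on a Service. A hedged sketch, assuming a recent Kubernetes version (the `internalTrafficPolicy` field) and an illustrative `app: my-app` pod label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service          # illustrative name
spec:
  selector:
    app: my-app              # assumed label on the backing pods
  ports:
    - port: 80
      targetPort: 8080
  # Ask kube-proxy to route in-cluster traffic only to endpoints on
  # the same node as the client, avoiding cross-node network hops.
  internalTrafficPolicy: Local
```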

How Worker Nodes Work Together

  • Cluster Composition:

    • Multiple worker nodes with container runtimes, kubelet, and kube proxy.

    • Pods (like app and database) are distributed across the worker nodes.

Communication Between Pods

  • Service Layer:

    • A Service acts as a load balancer, forwarding requests to an appropriate pod.

    • Helps distribute traffic and ensures pods communicate efficiently.
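For example, an app pod would typically reach a database pod through a Service name rather than a pod IP. A hedged sketch, using an illustrative MongoDB backend (labels and names are assumptions):

```yaml
# Illustrative Service giving database pods a stable, load-balanced address.
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service      # illustrative name
spec:
  selector:
    app: mongodb             # assumed label on the database pods
  ports:
    - port: 27017
      targetPort: 27017
```

The app then connects to `mongodb-service:27017`, and kube-proxy distributes the traffic across whichever pods match the selector.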


Master Nodes

Key Processes on Master Nodes

  1. API Server

    • The gateway for interacting with the Kubernetes cluster.

    • Handles requests for scheduling, updates, and querying cluster status.

    • Validates requests and ensures only authenticated and authorized actions are executed.

    • Example: When a user wants to deploy a new application, they interact with the API server.

  2. Scheduler

    • Determines which worker node will host a new pod.

    • Considers available resources (CPU, RAM, etc.) and node capacity.

    • Once a node is selected, the kubelet on that node is instructed to start the pod.

    • Ensures balanced resource allocation across the cluster.

  3. Controller Manager

    • Monitors the state of the cluster and handles recovery when necessary.

    • Detects pod failures and reschedules or restarts pods as required.

    • Works in conjunction with the scheduler to maintain the cluster's state.

  4. Etcd (Key-Value Store)

    • Stores the entire cluster's state.

    • Acts as the "brain" of the cluster.

    • Every change in the cluster (e.g., pod creation or deletion) is saved here.

    • Does not store application data (e.g., database content) but keeps track of the configuration and state information of the cluster.
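The scheduler's resource-based placement described above is driven by the resource requests declared in the pod spec: the scheduler only places a pod on a node with enough unreserved CPU and memory to satisfy its requests. A hedged sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                  # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:            # the scheduler only binds this pod to a node
          cpu: "250m"        # with at least this much unreserved CPU/RAM
          memory: "128Mi"
        limits:              # hard caps enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```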

Master Node Communication with Worker Nodes

  • API Server interacts with the scheduler and controller manager to manage pods and other resources across worker nodes.

  • Etcd ensures the cluster state is consistently stored and shared across all master nodes.


Cluster Configuration and Scaling

Typical Cluster Setup

  • Small Cluster Example:

    • 2 master nodes

    • 3 worker nodes

    • Master nodes handle control plane processes; worker nodes handle the workload.

Resource Distribution

  • Master Node Hardware:

    • Needs fewer resources (CPU, RAM, storage) since it handles control tasks.

  • Worker Node Hardware:

    • Requires more resources as it performs the actual processing of running pods.

Cluster Scaling

  • Scaling Up:

    • New worker nodes and master nodes can be added to the cluster easily.

    • Add new master nodes to ensure high availability.

    • Add new worker nodes as application demands and resource requirements increase.


Resilience and Reliability

Cluster Health and Failures

  • Controller Manager ensures the cluster maintains operational status even when a pod or worker node fails.

  • Etcd ensures that state data is replicated across master nodes for fault tolerance.
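The self-healing behaviour above is usually expressed declaratively: a Deployment states a desired replica count, and the controllers run by the controller manager recreate pods until the observed state matches it. A hedged sketch with illustrative names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # illustrative name
spec:
  replicas: 3                # desired state: if a pod or its node fails,
                             # a replacement pod is scheduled automatically
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```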

Adding New Nodes

  • Adding Worker Nodes:

    • New servers are set up with the necessary processes (container runtime, kubelet, kube proxy) and added to the cluster.

  • Adding Master Nodes:

    • New servers are set up with the master processes (API server, scheduler, controller manager, etc.) and integrated into the cluster.

    • Master nodes handle the management of the entire cluster.
