
Kubernetes Cluster Interface

Scalable container orchestration for production workloads.

Overview

The Kubernetes Cluster service automates container scheduling, scaling, networking, and service discovery. Each cluster includes managed control plane components and worker nodes. The service supports public and private clusters, auto-scaling, persistent storage, load balancing, and monitoring.


Prerequisites

  • Cloud console or CLI access
  • Virtual network/subnet configured
  • (Optional) SSH access for debugging worker nodes
  • Container images in an accessible registry

Step 1: Create Cluster

  1. Go to Kubernetes > Clusters
  2. Click Create Cluster
  3. Configure:
| Setting | Description |
|---|---|
| Cluster name | e.g., prod-cluster-01 |
| Region/Zone | Deployment location |
| Kubernetes version | Select version |
| Network/Subnet | Network for pod and service communication |
  4. Choose cluster type:

    • Public: Accessible via a public endpoint
    • Private: Accessible only from the internal network or VPN
  5. Configure node pool:

    • Instance type and size
    • Node count (min/max for auto-scaling)
  6. Click Create Cluster

Provisioning takes several minutes.


Step 2: Configure Node Pools

  • Navigate to Node Pools within your cluster
  • Add pools for different workloads (general, GPU, memory-optimized)
  • Enable auto-scaling based on resource utilization
  • Set taints and labels for pod scheduling control
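As an illustration, a taint applied to a dedicated pool can be matched by a toleration and a node selector in the pod spec. The pool label and taint key below are hypothetical examples, not values the service defines:

```yaml
# Hypothetical: assumes a GPU node pool labeled pool=gpu and
# tainted with gpu=true:NoSchedule when it was created.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job
spec:
  nodeSelector:
    pool: gpu                 # schedule only onto the GPU pool
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"    # tolerate the pool's taint
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04
```

Pods without the toleration are kept off the tainted pool, so specialized (and expensive) nodes stay reserved for the workloads that need them.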

Step 3: Connect to Cluster

  1. Download kubeconfig from dashboard
  2. Set the config as active and verify node status:
export KUBECONFIG=~/Downloads/kubeconfig.yaml
kubectl get nodes

Worker nodes should report a Ready status.


Step 4: Deploy Application

kubectl create deployment webapp --image=nginx:latest
kubectl expose deployment webapp --port=80 --type=LoadBalancer

This creates a Deployment managing NGINX pods and a Service exposing them via external load balancer.
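The two imperative commands above can also be expressed declaratively. A minimal sketch of the equivalent manifests, using the same names, image, and port as the commands:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: LoadBalancer        # provisions an external load balancer
  selector:
    app: webapp             # routes to pods with this label
  ports:
    - port: 80
      targetPort: 80
```

Saving this as a file and running kubectl apply -f on it keeps the deployment under version control, which is generally preferable for production workloads.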


Step 5: Configure Networking

| Component | Description |
|---|---|
| Service Types | ClusterIP, NodePort, LoadBalancer |
| Ingress Controller | NGINX or Traefik for HTTP routing |
| Network Policies | Control pod-to-pod communication |
| Private Networking | Internal subnets and load balancers |
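For HTTP routing, an Ingress can front the webapp Service from Step 4. The host name below is a placeholder, and the manifest assumes an NGINX ingress controller is installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller
  rules:
    - host: webapp.example.com   # placeholder host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp     # the Service created in Step 4
                port:
                  number: 80
```

An Ingress lets many HTTP services share one load balancer and IP address, rather than provisioning a LoadBalancer Service per application.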

Step 6: Add Persistent Storage

  1. Go to Storage > Volumes or use PVCs
  2. Create a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  3. Mount the PVC in a Pod spec for persistent data
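For example, the app-data claim above could be mounted into a pod like this (the pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:latest
      volumeMounts:
        - name: data
          mountPath: /var/lib/app   # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data         # the PVC from step 2
```

Data written under the mount path survives pod restarts and rescheduling, as long as the claim remains bound.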

Step 7: Monitoring and Logs

  • View CPU, memory, pod status in metrics dashboard
  • Stream pod logs:
kubectl logs <pod-name>
  • Configure autoscaling:
    • HPA: Horizontal Pod Autoscaler for load-based scaling
    • Cluster Autoscaler: Node-level scaling
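A minimal HPA for the webapp Deployment from Step 4 might look like the sketch below. The replica bounds and target utilization are chosen for illustration, and the metrics server must be installed for CPU metrics to be available:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 1
  maxReplicas: 5                     # illustrative upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # illustrative CPU threshold
```

The HPA adds or removes pods within the replica bounds; if the new pods cannot be scheduled, the Cluster Autoscaler then adds nodes.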

Step 8: Updates and Maintenance

  • Upgrade Kubernetes from dashboard (control plane first, then nodes)
  • Rotate credentials and API tokens periodically
  • Use rolling updates and monitor their progress:
kubectl rollout status deployment/webapp
  • Back up manifests and secrets regularly

Features Summary

| Feature | Description |
|---|---|
| Managed Control Plane | HA Kubernetes API servers and etcd |
| Node Pools | Worker groups with specific instance types |
| Auto-Scaling | Dynamic node adjustment |
| Public/Private | Choose security and exposure level |
| Load Balancing | Built-in L4/L7 load balancers |
| Persistent Storage | Block and file storage |
| Network Policies | Fine-grained traffic control |
| Monitoring | Resource metrics and alerting |
| Upgrades | Seamless version upgrades |
| RBAC | Role-based access control |

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| Pods stuck in Pending | No available nodes or quota exceeded | Scale up the node pool |
| kubectl timeout | API server unreachable | Check endpoint and kubeconfig |
| LoadBalancer not provisioned | No public IP pool | Allocate IPs or contact admin |
| Volume mount fails | PVC not bound | Verify the PVC and storage class |
| Autoscaling not triggering | Metrics missing | Install the metrics server |