Resources

Learn how to assign CPU/memory resources to your containerized application.

This section continues from the previous Deployment section; make sure you do the tutorial in sequence.

Default Configuration

You can specify the computing resource needs for each container. By default, each container is given 10% of a CPU and no memory restrictions.

The defaults can cause issues:

  • If a Node has 1 full CPU, then Kubernetes may schedule up to 10 instances of the same container onto it, which may overload the system (see the command sketched after this list to check a Node's capacity).

  • If a Node has 16GB of RAM and no memory restriction is set, then each container instance (JVM) may assume it can use up to 16GB, causing memory overuse (and thus virtual memory swapping, etc.).
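
A quick way to see how much of each Node's capacity is already claimed by resource requests is to describe the Nodes and look at the Allocatable and Allocated resources sections (the grep below is just one way to trim the output):

kubectl describe nodes | grep -A 8 "Allocated resources"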

You can see the current resource configuration by describing a Pod instance; look for the Requests/Limits lines.

POD_NAME=$(kubectl get pods -lapp=helloworld -o jsonpath='{.items[0].metadata.name}')

kubectl describe pod $POD_NAME

The details should show a Requests section with the cpu value set to 100m:

Name:           helloworld-...
Namespace:      default...
Containers:
  helloworld:
    ...
    Requests:
      cpu:  100m
...

The default value is 100m, which means 100 milli = 100/1000 = 10% of a vCPU core.
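
You can also read the request directly with jsonpath instead of scanning the describe output (a sketch reusing the POD_NAME variable from above):

kubectl get pod $POD_NAME -o jsonpath='{.spec.containers[0].resources.requests.cpu}'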

The default is configured per Namespace. The application was deployed into the default Namespace. Look at the default resource configuration for this Namespace:

kubectl describe ns default

See the output:

Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

Resource Quotas
 Name:                       gke-resource-quotas
 Resource                    Used  Hard
 --------                    ---   ---
 count/ingresses.extensions  1     100
 count/jobs.batch            0     5k
 pods                        3     1500
 services                    2     500

Resource Limits
 Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---  ---  ---------------  -------------  -----------------------
 Container  cpu       -    -    100m             -              -

However, the configuration is actually stored in a LimitRange Kubernetes resource:

kubectl get limitrange limits -oyaml
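
The output should look roughly like the following (metadata trimmed); the key part is the defaultRequest of 100m CPU applied to every Container in the Namespace:

apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: default
spec:
  limits:
  - defaultRequest:
      cpu: 100m
    type: Container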

Resource Request

In Kubernetes, you can reserve capacity by setting resource requests for CPU and memory. Configure the deployment to request at least 20% of a CPU and 128Mi of RAM.

k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helloworld
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - image: gcr.io/.../helloworld
        name: helloworld
        # Add the resources requests block
        resources:
          requests:
            cpu: 200m
            memory: 128Mi

In this example, the CPU request is 200m, which means 200 milli = 200/1000 = 20% of a vCPU core.

Memory is 128Mi, which is 128 Mebibytes = ~134 Megabytes.

When specifying the Memory resource allocation, do not accidentally use m as the unit. 128m means 0.128 bytes.
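
Apply the updated manifest and verify that the new values show up (this assumes you have been applying the manifests with kubectl apply, as in the earlier Deployment section; a new Pod is created, so look up the Pod name again):

kubectl apply -f k8s/deployment.yaml

POD_NAME=$(kubectl get pods -lapp=helloworld -o jsonpath='{.items[0].metadata.name}')

kubectl describe pod $POD_NAME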

Resource Limit

The application can consume more CPU and memory than requested; it can burst up to the limit, but cannot exceed it. Configure the deployment to set the limits:

k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helloworld
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - image: gcr.io/.../helloworld
        name: helloworld
        # Add the resources requests and limits block
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 256Mi

CPU is a compressible resource. If the application exceeds the CPU limit, it is simply throttled; this caps throughput and can increase latency, but the container keeps running.

Memory is not a compressible resource. If the application exceeds the Memory limit, then the container will be killed (OOMKilled) and restarted.
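
If you suspect a container was OOMKilled, the Pod status records the last termination reason (a sketch reusing the same Pod name lookup as before):

POD_NAME=$(kubectl get pods -lapp=helloworld -o jsonpath='{.items[0].metadata.name}')

kubectl get pod $POD_NAME -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'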

The default can be updated. See the Kubernetes documentation on how to Configure Default CPU Requests and Limits for a Namespace.

See the Kubernetes Resource Units documentation for descriptions of the units, such as m, M, and Mi.

For Java applications, read the Container Awareness section to make sure you are using a container-aware OpenJDK version to avoid unnecessary OOMKilled errors.
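
As an illustration only (the Container Awareness section covers this in detail), on a container-aware JDK (OpenJDK 8u191+ or 10+) one common way to keep the heap within the container's memory limit is to set a relative cap via the JAVA_TOOL_OPTIONS environment variable in the container spec; this is a sketch, not part of the tutorial's manifests:

      containers:
      - image: gcr.io/.../helloworld
        name: helloworld
        # Illustrative sketch: cap the JVM heap at 75% of the container's memory limit
        env:
        - name: JAVA_TOOL_OPTIONS
          value: "-XX:MaxRAMPercentage=75.0"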
