While it's easy to create a Kubernetes Engine cluster, it takes a bit more work to provision a production-grade cluster. This cluster will enable several features for production use:
Workload Identity is the recommended way to access Google Cloud services from within GKE; it lets you securely associate a specific service account with a workload.
VPC-native networking (alias IP) allows Kubernetes Pod IP addresses to be natively routable on a VPC. Most importantly, it allows traffic to reach a Kubernetes Pod in a single hop from a Google Cloud Load Balancer, without unnecessary intermediary routing.
Cloud Operations for GKE (formerly Stackdriver) allows you to monitor your running Google Kubernetes Engine clusters, manage your system and debug logs, and analyze your system's performance using advanced profiling and tracing capabilities.
Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
Node auto-upgrade automatically upgrades the Google Kubernetes Engine node version to keep it up to date with the cluster control plane version.
PROJECT_ID=$(gcloud config get-value project)
gcloud container clusters create demo-cluster \
  --num-nodes 4 \
  --machine-type n1-standard-4 \
  --network=default \
  --workload-pool=${PROJECT_ID}.svc.id.goog \
  --enable-ip-alias \
  --enable-network-policy \
  --enable-stackdriver-kubernetes \
  --enable-binauthz \
  --enable-shielded-nodes \
  --shielded-secure-boot \
  --enable-autorepair \
  --enable-autoupgrade \
  --scopes=cloud-platform
These nodes will still have public IPs and be able to access the public Internet. For most production clusters, you'll want to consider creating a Private Cluster and controlling egress via Cloud NAT, as sketched below.
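A minimal sketch of that setup follows. The cluster name, router name, NAT name, region, and control-plane CIDR here are illustrative assumptions, not values used earlier in this section:
# Assumption: nodes get only private IPs; the control plane keeps its default public endpoint.
gcloud container clusters create demo-private-cluster \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr=172.16.0.0/28

# Assumption: a Cloud Router plus Cloud NAT provides controlled egress for the private nodes.
gcloud compute routers create demo-router --network=default --region=us-central1
gcloud compute routers nats create demo-nat \
  --router=demo-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges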
Credentials
Kubernetes credentials are automatically retrieved and stored in your $HOME/.kube/config file. If you need to re-retrieve the credentials:
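For example, assuming the demo-cluster created above and that your default compute zone/region is already set in your gcloud configuration:
gcloud container clusters get-credentials demo-cluster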
The Kubernetes cluster is composed of multiple Nodes - each node is a Compute Engine Virtual Machine. When you deploy a container image into Kubernetes, a container instance is ultimately scheduled and run on one of the Nodes.
In Kubernetes Engine, these nodes are managed by a Node Pool, which is a set of homogeneous Compute Engine Virtual Machines (i.e., they have exactly the same configuration, such as machine type, disk, operating system, etc.).
You can add different machine types to your Kubernetes Engine cluster by creating a new Node Pool with the configuration you want, as sketched below.
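For example, a sketch of adding a second Node Pool with a different machine type to the demo-cluster created above; the pool name, machine type, and node count here are assumptions for illustration:
# Assumption: a high-memory pool for workloads that need it.
gcloud container node-pools create high-mem-pool \
  --cluster=demo-cluster \
  --machine-type=n1-highmem-8 \
  --num-nodes=2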
You can see a list of Virtual Machines using gcloud:
gcloud compute instances list
You can also use kubectl to list the nodes that belong to the current cluster:
kubectl get nodes
You can also SSH into a node directly if needed, by specifying the name of the node:
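For example, assuming your default zone is configured and using a node name taken from the kubectl get nodes output (the name shown here is hypothetical):
# Assumption: replace with an actual node name from 'kubectl get nodes'.
gcloud compute ssh gke-demo-cluster-default-pool-12345678-abcd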