Problem Statement
In Part Three, I established that one of the most important jobs of a platform team is to define the contract between developer intent and platform implementation.
That idea is useful in theory, but it becomes much clearer when you turn it into actual APIs that developers can use.
This post does exactly that. Instead of stopping at concepts, we will implement two simple internal platform contracts on AKS using kro:
| Contract | What the developer wants | What the platform abstracts/hides |
|---|---|---|
| CostOptimizedApp | Deploy a stateless app with the smallest sensible footprint | Deployment and internal Service defaults |
| HighlyAvailableApp | Deploy an internal app with stronger availability defaults | Three replicas, zone-spread rules, internal Service, and PodDisruptionBudget defaults |
This Part Four keeps the scope deliberately simple. It focuses on internal workload APIs only.
Solution
The walkthrough below uses one zonal AKS cluster in westus2, installs kro, and then creates two generated APIs that application teams can consume.
Before You Start
You need the following tools available locally:
- Azure CLI
- kubectl
- helm
There is one infrastructure caveat to keep in mind:
- The second contract encodes zone-spread intent with topology.kubernetes.io/zone, so this walkthrough creates a zonal AKS cluster to make that behavior visible.
Shared Environment
Set a few environment variables up front so the commands are easier to follow:
export RESOURCE_GROUP="rg-kro-simple-demo"
export CLUSTER_NAME="aks-kro-simple-demo"
export LOCATION="westus2"
export NAMESPACE="platform-demo"

Step 1: Create a Zonal AKS Cluster in westus2
Create the resource group and cluster, then download credentials:
az group create \
--name "$RESOURCE_GROUP" \
--location "$LOCATION"
az aks create \
--resource-group "$RESOURCE_GROUP" \
--name "$CLUSTER_NAME" \
--location "$LOCATION" \
--node-count 3 \
--zones 1 2 3 \
--generate-ssh-keys
az aks get-credentials \
--resource-group "$RESOURCE_GROUP" \
--name "$CLUSTER_NAME" \
--overwrite-existing
kubectl get nodes -L topology.kubernetes.io/zone

The point of the cluster setup is simple: the second contract will ask Kubernetes to spread replicas across zones, so the backing cluster should expose zone labels for the scheduler to work with.
Step 2: Install kro
Install kro using the official Helm chart and create a namespace for the example workloads:
helm install kro oci://registry.k8s.io/kro/charts/kro \
--namespace kro-system \
--create-namespace
helm list -n kro-system
kubectl get pods -n kro-system
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

Once the kro pod is running, you are ready to create platform APIs.
Step 3: Contract 1 - Cost-Optimized Internal Application
The first contract represents a very common platform request: "deploy this stateless app with a minimal footprint." The developer should not need to think about Deployments and Services. They should only need a small API.
At a high level, the first contract looks like this:
%%{init: {"theme": "base", "themeVariables": {"primaryColor": "#1565C0", "primaryTextColor": "#ffffff", "primaryBorderColor": "#0D47A1", "lineColor": "#455A64", "secondaryColor": "#E3F2FD", "tertiaryColor": "#E8F5E9"}}}%%
flowchart LR
dev[Developer Team]:::team
platformTeam[Platform Team]:::owner
contract[RGD Contract: CostOptimizedApp]:::contract
kro[kro Controller]:::platform
deploy[Deployment]:::workload
service[Internal Service]:::workload
app[Running Internal App]:::result
dev -->|consumes| contract
platformTeam -->|defines| contract
contract --> kro
kro --> deploy
kro --> service
deploy --> app
service --> app
classDef team fill:#E3F2FD,stroke:#1565C0,color:#0D47A1
classDef owner fill:#F3E5F5,stroke:#4A148C,color:#4A148C
classDef contract fill:#E8F5E9,stroke:#2E7D32,color:#1B5E20
classDef platform fill:#1565C0,stroke:#0D47A1,color:#ffffff
classDef workload fill:#FFF8E1,stroke:#E65100,color:#E65100
classDef result fill:#ECEFF1,stroke:#607D8B,color:#263238

Apply the ResourceGraphDefinition below:
kubectl apply -f - <<'EOF'
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
name: cost-optimized-app
spec:
schema:
apiVersion: v1alpha1
kind: CostOptimizedApp
spec:
image: string | default="nginx:1.27"
port: integer | default=80
resources:
- id: deployment
template:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${schema.metadata.name}
namespace: ${schema.metadata.namespace}
spec:
replicas: 1
selector:
matchLabels:
app: ${schema.metadata.name}
template:
metadata:
labels:
app: ${schema.metadata.name}
spec:
containers:
- name: app
image: ${schema.spec.image}
ports:
- containerPort: ${schema.spec.port}
- id: service
template:
apiVersion: v1
kind: Service
metadata:
name: ${schema.metadata.name}
namespace: ${schema.metadata.namespace}
spec:
selector: ${deployment.spec.selector.matchLabels}
ports:
- port: ${schema.spec.port}
targetPort: ${schema.spec.port}
EOF

Verify that the new API exists:
kubectl get rgd cost-optimized-app -o wide
kubectl api-resources --api-group=kro.run | grep CostOptimizedApp
kubectl get costoptimizedapps -A

kro follows the normal Kubernetes pattern here: the generated kind is CamelCase (CostOptimizedApp), the object metadata.name stays lowercase (demo-cost-app), and the resource name you use with kubectl get is the lowercase plural (costoptimizedapps).
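As a rough sketch of that convention, the resource name is just the lowercased kind with an "s" appended. This assumes simple pluralization, which holds for both kinds in this post but is not a general rule for every Kubernetes resource:

```shell
# Sketch of the naming convention: lowercase the CamelCase kind,
# then append "s" to get the resource name used with "kubectl get".
kind="CostOptimizedApp"
resource="$(printf '%s' "$kind" | tr '[:upper:]' '[:lower:]')s"
echo "$resource"   # costoptimizedapps
```

The same transformation turns HighlyAvailableApp into highlyavailableapps, which is the name used in Step 4.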
Only continue once the RGD shows STATE as Active and READY as True. That means kro has generated and registered the API.
Now create an instance of that API:
kubectl apply -f - <<'EOF'
apiVersion: kro.run/v1alpha1
kind: CostOptimizedApp
metadata:
name: demo-cost-app
namespace: platform-demo
spec:
image: nginx:1.27
EOF

Verify the result:
kubectl get costoptimizedapps -n "$NAMESPACE"
kubectl get deploy,svc -n "$NAMESPACE"
kubectl port-forward -n "$NAMESPACE" service/demo-cost-app 8080:80

In another terminal, test the app:
curl -I http://127.0.0.1:8080/

This is the point of the first contract. The developer created one simple object, and the platform translated that into a Deployment and internal Service.
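Because the schema declares both image and port with defaults, a developer can also override them explicitly. A hypothetical instance that pins a different port might look like this (demo-cost-app-8080 is an illustrative name, not part of the walkthrough, and spec.port must match the port the container image actually listens on):

```yaml
apiVersion: kro.run/v1alpha1
kind: CostOptimizedApp
metadata:
  name: demo-cost-app-8080   # illustrative name, not created in this post
  namespace: platform-demo
spec:
  image: nginx:1.27
  port: 8080                 # overrides the schema default of 80; the image
                             # must actually listen on this port
```

This is the shape of the contract working as intended: the platform owns the Deployment and Service details, and the developer only touches the fields the schema exposes.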
Step 4: Contract 2 - Highly Available Internal Application
The second contract is still simple, but it encodes better operational defaults. Here the developer wants an internal app with stronger availability characteristics, and the platform wants to make those defaults consistent.
This contract uses:
- three replicas by default
- topology spread constraints for better distribution across zones
- an internal Service
- a PodDisruptionBudget
At a high level, the second contract looks like this:
%%{init: {"theme": "base", "themeVariables": {"primaryColor": "#1565C0", "primaryTextColor": "#ffffff", "primaryBorderColor": "#0D47A1", "lineColor": "#455A64", "secondaryColor": "#E3F2FD", "tertiaryColor": "#E8F5E9"}}}%%
flowchart LR
dev[Developer Team]:::team
platformTeam[Platform Team]:::owner
contract[RGD Contract: HighlyAvailableApp]:::contract
kro[kro Controller]:::platform
deploy[Three-Replica Deployment]:::workload
service[Internal Service]:::workload
pdb[PodDisruptionBudget]:::workload
app[Running HA Internal App]:::result
dev -->|consumes| contract
platformTeam -->|defines| contract
contract --> kro
kro --> deploy
kro --> service
kro --> pdb
deploy --> app
service --> app
pdb --> app
classDef team fill:#E3F2FD,stroke:#1565C0,color:#0D47A1
classDef owner fill:#F3E5F5,stroke:#4A148C,color:#4A148C
classDef contract fill:#E8F5E9,stroke:#2E7D32,color:#1B5E20
classDef platform fill:#1565C0,stroke:#0D47A1,color:#ffffff
classDef workload fill:#FFF8E1,stroke:#E65100,color:#E65100
classDef result fill:#ECEFF1,stroke:#607D8B,color:#263238

Apply the second ResourceGraphDefinition:
kubectl apply -f - <<'EOF'
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
name: highly-available-app
spec:
schema:
apiVersion: v1alpha1
kind: HighlyAvailableApp
spec:
image: string | default="nginx:1.27"
port: integer | default=80
resources:
- id: deployment
template:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${schema.metadata.name}
namespace: ${schema.metadata.namespace}
spec:
replicas: 3
selector:
matchLabels:
app: ${schema.metadata.name}
template:
metadata:
labels:
app: ${schema.metadata.name}
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app: ${schema.metadata.name}
containers:
- name: app
image: ${schema.spec.image}
ports:
- containerPort: ${schema.spec.port}
- id: service
template:
apiVersion: v1
kind: Service
metadata:
name: ${schema.metadata.name}
namespace: ${schema.metadata.namespace}
spec:
selector: ${deployment.spec.selector.matchLabels}
ports:
- port: ${schema.spec.port}
targetPort: ${schema.spec.port}
- id: pdb
template:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: ${schema.metadata.name}
namespace: ${schema.metadata.namespace}
spec:
minAvailable: 2
selector:
matchLabels: ${deployment.spec.selector.matchLabels}
EOF

Verify that the generated API is available:
kubectl get rgd highly-available-app -o wide
kubectl api-resources --api-group=kro.run | grep HighlyAvailableApp
kubectl get highlyavailableapps -A

The same naming rule applies here: HighlyAvailableApp is the kind, demo-ha-app is the object name, and highlyavailableapps is the resource name used by kubectl get.
As with the first contract, wait until the RGD is Active and Ready before creating an instance.
Now create an instance:
kubectl apply -f - <<'EOF'
apiVersion: kro.run/v1alpha1
kind: HighlyAvailableApp
metadata:
name: demo-ha-app
namespace: platform-demo
spec:
image: nginx:1.27
EOF

Step 5: Verify the Highly Available Contract
First, check the workload resources that the contract created:
kubectl get highlyavailableapps -n "$NAMESPACE"
kubectl get deploy,svc,pdb -n "$NAMESPACE"
kubectl get pods -n "$NAMESPACE" -l app=demo-ha-app -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
kubectl get nodes -o custom-columns=NAME:.metadata.name,ZONE:.metadata.labels.topology\\.kubernetes\\.io/zone
kubectl describe pdb demo-ha-app -n "$NAMESPACE"

You should see three running pods, an internal Service, and a PodDisruptionBudget with minAvailable: 2.
You can also port-forward the Service and verify the app response:
kubectl port-forward -n "$NAMESPACE" service/demo-ha-app 8081:80

In another terminal:
curl -I http://127.0.0.1:8081/

This is the point of the second contract. The developer still created one simple object, but the platform translated it into a three-replica Deployment with zone-spread intent and a disruption budget.
A Note on Zone Spreading
The second contract encodes topology spread constraints using topology.kubernetes.io/zone. In this walkthrough, I create a zonal AKS cluster so the scheduler has the information it needs to distribute replicas.
If you reuse a non-zonal cluster, the contract still expresses the right intent, but actual placement may not span zones. I use ScheduleAnyway here so the example remains portable even when perfect zonal balance is not possible.
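If your platform should enforce zonal balance rather than merely prefer it, the same constraint can be made strict. A sketch of the stricter variant, keeping every other field of the contract's Deployment template unchanged:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    # DoNotSchedule leaves pods Pending rather than letting them pile
    # into one zone; ScheduleAnyway (used in this post) only prefers balance.
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo-ha-app
```

The trade-off is schedulability: on a non-zonal cluster, DoNotSchedule can leave replicas Pending indefinitely, which is exactly why this walkthrough opts for ScheduleAnyway and keeps the example portable.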
Summary
This post takes the contract idea from Part Three and turns it into something operational.
With kro, the platform team can define a low-friction API for a cost-optimized internal app and another low-friction API for an internal app with stronger availability defaults. The important point is that the developer only sees a few lines of YAML, even though the platform is doing much more underneath.
For the first contract, the developer-facing API is just this:
apiVersion: kro.run/v1alpha1
kind: CostOptimizedApp
metadata:
name: demo-cost-app
namespace: platform-demo
spec:
  image: nginx:1.27

That small object expands into the platform defaults for a Deployment and internal Service.
For the second contract, the developer-facing API is still small:
apiVersion: kro.run/v1alpha1
kind: HighlyAvailableApp
metadata:
name: demo-ha-app
namespace: platform-demo
spec:
  image: nginx:1.27

That object expands into a three-replica Deployment, an internal Service, topology-spread rules, and a PodDisruptionBudget.
That is the real value of the platform contract: not hiding Kubernetes completely, but concentrating its complexity behind APIs that are stable, understandable, and aligned with developer intent.