Symptom:
We have a normal Deployment that was working fine. When we tested it on a new Kubernetes cluster, the Deployment was created successfully, but no Pod was created, and there were no warning or error messages.
"kubectl describe deployment" did not show any hints either; the Pod security policy check and the RBAC privilege check both passed.
OldReplicaSets:  &lt;none&gt;
NewReplicaSet:   livesqlstg-admin-678df959b4 (0/1 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---- ----                   -------
  Normal  ScalingReplicaSet  16s  deployment-controller  Scaled up replica set livesqlstg-admin-678df959b4 to 1
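Digging one level deeper, though, reveals the problem: when an admission check rejects Pod creation, the error is recorded as an event on the ReplicaSet, not on the Deployment. Describing the ReplicaSet named above surfaces it (the quota name "mem-cpu-quota" and the ages below are illustrative):

kubectl describe rs livesqlstg-admin-678df959b4

Events:
  Type     Reason        Age               From                   Message
  ----     ------        ----              ----                   -------
  Warning  FailedCreate  2s (x4 over 20s)  replicaset-controller  Error creating: pods "livesqlstg-admin-678df959b4-" is forbidden: failed quota: mem-cpu-quota: must specify limits.cpu,limits.memory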
Solution:
The reason is that a ResourceQuota is applied to the namespace:
spec:
  hard:
    configmaps: "10"
    limits.cpu: "10"
    limits.memory: 20Gi
    persistentvolumeclaims: "10"
    ....
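To see which quotas are active in the namespace and how much of each resource is already used (the namespace and quota names below are placeholders):

kubectl get resourcequota -n &lt;namespace&gt;
kubectl describe resourcequota &lt;quota-name&gt; -n &lt;namespace&gt;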
Because the quota sets hard caps on limits.cpu and limits.memory, the quota admission controller rejects any Pod in the namespace that does not declare those limits. So we need to add a resources section to the container spec in the deployment YAML file, i.e.:
resources:
  requests:
    memory: "10Gi"
    cpu: "1"
  limits:
    memory: "10Gi"
    cpu: "1"
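For context, here is a sketch of where that block sits in the manifest: under each container in the Pod template. The metadata, labels, and image below are illustrative, inferred from the ReplicaSet name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: livesqlstg-admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: livesqlstg-admin
  template:
    metadata:
      labels:
        app: livesqlstg-admin
    spec:
      containers:
      - name: livesqlstg-admin
        image: example.com/livesqlstg-admin:latest   # placeholder image
        resources:                                   # required by the quota
          requests:
            memory: "10Gi"
            cpu: "1"
          limits:
            memory: "10Gi"
            cpu: "1"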
It would be good for Kubernetes to surface a warning for this on the Deployment itself; as shown above, the rejection is recorded, but only as an event on the ReplicaSet.
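As an aside (our suggestion, not part of the original fix): if adding a resources section to every workload in the namespace is impractical, a LimitRange can inject default requests and limits into containers that do not declare their own, which also satisfies the quota. A minimal sketch, with illustrative values:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits            # illustrative name
spec:
  limits:
  - type: Container
    default:                      # limits applied when a container sets none
      cpu: "1"
      memory: 10Gi
    defaultRequest:               # requests applied when a container sets none
      cpu: "1"
      memory: 10Gi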