Scale down a deployment by removing specific pods (PodDeletionCost) #2255
/sig apps
@annajung @JamesLaverack James, you mentioned in the SIG Apps Slack channel that this enhancement is at risk; can you clarify why? It meets the criteria.
@ahg-g Just to follow up here too, we discussed in Slack and this was due to a delay in reviewing. We've now marked this as "Tracked" on the enhancements spreadsheet for 1.21. Thank you for getting back to us. :)
Hi @ahg-g, since your enhancement is scheduled for 1.21, please keep in mind the important upcoming dates:
As a reminder, please link all of your k/k PR(s) and k/website PR(s) to this issue so we can track them. Thanks!
Done.
Hi @ahg-g, the Enhancements team is currently tracking the following PRs. As this PR is merged, can we mark this enhancement complete for code freeze, or do you have other PR(s) being worked on as part of the release?
Hi @JamesLaverack, yes, the k/k code is merged; the docs PR is still open though.
/stage beta
/milestone v1.22
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
Hey all watching! After thinking more about how we can make I still need to write up a full proposal and KEP, but my initial thoughts can be found at: The gist of the idea is that we can make
Many thanks for all your work and reflection on this subject. I'm strongly interested in the ability to choose which pods are evicted during scale-in, and I try to follow the corresponding discussions, feature developments, and proposals. I searched for a long time for how to achieve this correctly. I was happy with PodDeletionCost, but now I am a little disappointed, as it seems it will stay in beta (please do not remove this feature until an equivalent one is released). My need (which may be different from yours) is to selectively evict or replace terminated pods, keeping a dynamic number of fresh pod replicas without terminating potentially running pods (I mean pods whose applications are currently processing something). I may be wrong, but I think the root cause of the problem is the incompatibility between automatic pod restart and the scale-in features. Without PodDeletionCost, one known workaround is to:
For me, this workaround shows that ReplicaSet and the scale-in features are incompatible when it comes to selecting pods to be evicted: currently, they cannot work when mixed together. Also, I think no controller should terminate a pod; the application inside the pod should terminate, causing its pod to terminate, and then a controller could evict only already-terminated pods. Here is my proposal:
With these behaviors, scale-in will select pods to be evicted based on the termination status of the applications inside the pods (here Succeeded or Failed) instead of external indicators. If this proposal is acceptable and can work, it may be achievable with minimal coding effort. What do you think?
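For context, the existing beta mechanism discussed in this issue works through the `controller.kubernetes.io/pod-deletion-cost` annotation on the pods of a ReplicaSet: a lower cost makes a pod preferred for deletion on scale-down, and a missing annotation counts as 0. A minimal sketch of a pod carrying it (pod name and image are illustrative, not from this thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker-idle            # illustrative name
  labels:
    app: worker
  annotations:
    # Lower value = preferred for deletion on scale-down.
    # Beta and enabled by default since Kubernetes 1.22.
    controller.kubernetes.io/pod-deletion-cost: "-100"
spec:
  containers:
    - name: worker
      image: example/worker:latest   # illustrative image
```

The annotation can also be set on a running pod, e.g. with `kubectl annotate`, which is how an external component would mark idle replicas as cheap to remove before a scale-down.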
Maybe my need is different because I need to automatically replace or delete terminated pods. I think there are two cases to distinguish during scale-in: the ability to remove terminated pods from a ReplicaSet (without replacing them, which implies a ReplicaSet restartPolicy other than Always), and the ability to remove running pods (using the probe).
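To make the selection order in the proposal above concrete, here is a simplified, hypothetical sketch of how a scale-in step could rank victims: terminated pods (Succeeded or Failed) first, then pods with a lower deletion cost. This only illustrates the ordering idea; the real ReplicaSet controller is written in Go and ranks on more criteria (scheduling, readiness, restarts, age), with `pod-deletion-cost` as one tiebreaker.

```python
# Simplified, illustrative ranking of scale-in victims.
# Assumption: each pod is a dict with "name", "phase", and an optional
# "deletion_cost" (mirroring the controller.kubernetes.io/pod-deletion-cost
# annotation; a missing value counts as 0, lower cost = deleted first).

def pick_victims(pods, n):
    """Return the names of the n pods preferred for deletion on scale-in."""
    def rank(pod):
        terminated = pod["phase"] in ("Succeeded", "Failed")
        cost = pod.get("deletion_cost", 0)
        # Terminated pods sort first (0 before 1), then cheaper pods.
        return (0 if terminated else 1, cost)
    return [p["name"] for p in sorted(pods, key=rank)[:n]]

pods = [
    {"name": "a", "phase": "Running", "deletion_cost": 100},
    {"name": "b", "phase": "Succeeded"},
    {"name": "c", "phase": "Running", "deletion_cost": -100},
]
print(pick_victims(pods, 2))  # ['b', 'c']
```

Under this ordering, the completed pod is always removed before any running pod, and among running pods the annotation decides, which matches the intent described above.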
/milestone clear
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
@ahg-g I'm not in love with annotations as APIs. Do we REALLY think this is the best answer?
I think we have a reasonable counter proposal in kubernetes/kubernetes#107598 (comment); can we hold this in its current beta state until that proposal makes progress?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale |
/lifecycle stale
/remove-lifecycle stale |
Would the following be an acceptable design pattern?
/lifecycle stale
/remove-lifecycle stale |
Enhancement Description
- k/enhancements update PR(s): https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2255-pod-cost
- k/k update PR(s): Implements pod deletion cost kubernetes#99163
- k/website update PR(s): ReplicaSet pod-deletion-cost annotation website#26739
- k/enhancements update PR(s): Promote PodDeletionCost to Beta #2619
- k/k update PR(s): Graduate PodDeletionCost to Beta kubernetes#101080, Integration test for pod deletion cost feature kubernetes#101003
- k/website update(s): PodDeletionCost to Beta website#28417

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.