
Graduate the kube-scheduler ComponentConfig to GA #785

Closed
luxas opened this issue Jan 30, 2019 · 111 comments
Assignees
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team wg/component-standard Categorizes an issue or PR as relevant to WG Component Standard.
Milestone

Comments

@luxas
Member

luxas commented Jan 30, 2019

Enhancement Description

  • One-line enhancement description (can be used as a release note): Usage of the kube-scheduler configuration file has graduated from experimental, as the API version is now v1beta1
  • Primary contact (assignee): @alculquicondor
  • Responsible SIGs: @kubernetes/sig-scheduling-api-reviews @kubernetes/wg-component-standard
  • KEP: https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/785-scheduler-component-config-api
  • Link to e2e and/or unit tests:
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @bsalamat @k82cn
  • Approver (likely from SIG/area to which enhancement belongs): @bsalamat @k82cn
  • Enhancement target (which target equals to which milestone):
    • Alpha release target (x.y)
    • Beta release target (x.y) 1.19
    • Stable release target (x.y) 1.25

The kube-scheduler ComponentConfig is currently in v1alpha1. The spec needs to be graduated to v1beta1 and beyond in order to be widely usable.
/assign @bsalamat @k82cn
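For context, the scheduler reads this configuration from a file passed via its --config flag, and the apiVersion field inside that file is what graduates across releases. A minimal sketch of such a file at the v1alpha1 stage discussed here (field names from the kubescheduler.config.k8s.io API group; the kubeconfig path is a placeholder):

```yaml
# Minimal kube-scheduler ComponentConfig sketch (v1alpha1-era layout).
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf  # placeholder path
leaderElection:
  leaderElect: true
```

Graduation essentially means a file like this keeps working while its apiVersion moves from an experimental group version to a stable one.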

@k8s-ci-robot k8s-ci-robot added sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API labels Jan 30, 2019
@luxas luxas added this to the v1.14 milestone Jan 30, 2019
@luxas luxas added the wg/component-standard Categorizes an issue or PR as relevant to WG Component Standard. label Jan 30, 2019
@liggitt liggitt added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Jan 31, 2019
@sttts
Contributor

sttts commented Jan 31, 2019

We need a thorough API review of the kube-scheduler component config in light of unifying the configs for all components. Moving what we have to beta in the 1.14 time frame feels rushed, given that we have only just started with component-base.

@claurence

@luxas I don't see a KEP linked for this issue in the description. I'm removing it from the 1.14 milestone - to have it added back please file an exception request.

@kacole2
Contributor

kacole2 commented Apr 11, 2019

@luxas I'm the enhancement lead for 1.15. I don't see a KEP filed for this enhancement and per the guidelines, all enhancements will require one. Please let me know if this issue will have any work involved for this release cycle and update the original post to reflect it. Thanks!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 10, 2019
@mtaufen
Contributor

mtaufen commented Jul 11, 2019

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 11, 2019
@fejta-bot

Enhancement issues opened in kubernetes/enhancements should never be marked as frozen.
Enhancement Owners can ensure that enhancements stay fresh by consistently updating their states across release cycles.

/remove-lifecycle frozen

@k8s-ci-robot k8s-ci-robot removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jul 11, 2019
@mrbobbytables
Member

Hey there @luxas, I'm one of the 1.16 Enhancement Shadows. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.

Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.

As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining each alpha/beta/stable stages requirements.

Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.

Thank you.

@mrbobbytables mrbobbytables added the tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team label Jul 15, 2019
@bsalamat bsalamat added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jul 19, 2019
@bsalamat
Member

@mrbobbytables given that this issue does not have an owner and we didn't plan it for 1.16 in SIG Scheduling, I doubt this can make it into 1.16, but given that we still have another ~10 days before enhancement freeze, we may still be able to squeeze it in. If I find an owner for this, I will update it and let you know.
Let's remove this from 1.16 for now.

@jfbai

jfbai commented Jul 20, 2019

@bsalamat I'd like to work on this if you have not assigned it to anyone.

@bsalamat
Member

@jfbai Please go ahead and send a KEP as soon as possible. The KEP must be merged by the end of July for this to be considered for 1.16.

@jfbai

jfbai commented Jul 25, 2019

@jfbai Please go ahead and send a KEP as soon as possible. The KEP must be merged by the end of July for this to be considered for 1.16.

@bsalamat Are there any big changes to the kube-scheduler configuration when graduating to v1beta1? Or will it stay the same as v1alpha1?

According to Moving ComponentConfig API types to staging repos, the plan is to split component config into internal and external versioned types and move the external versioned config to staging/, so my understanding is that no structural changes will be introduced.

If I missed something important, could you please provide suggestions on how to accomplish this work, since I lack knowledge of the related efforts.

@bsalamat
Member

@bsalamat Are there any big changes to the kube-scheduler configuration when graduating to v1beta1? Or will it stay the same as v1alpha1?

Initially we didn't plan to make any changes, but now we should make sure that fields are optional as is proposed by: #1173

We need to wait for that KEP to be implemented. For that reason, we cannot promote the scheduler config to beta in 1.16.

@jfbai

jfbai commented Jul 30, 2019

@bsalamat Are there any big changes to the kube-scheduler configuration when graduating to v1beta1? Or will it stay the same as v1alpha1?

Initially we didn't plan to make any changes, but now we should make sure that fields are optional as is proposed by: #1173

We need to wait for that KEP to be implemented. For that reason, we cannot promote the scheduler config to beta in 1.16.

I see, thanks a lot.

@gracenng gracenng added the tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team label Jan 9, 2022
@gracenng gracenng removed this from the v1.23 milestone Jan 9, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 9, 2022
@ahg-g
Member

ahg-g commented Apr 9, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 9, 2022
@kerthcet
Member

I'd like to take this on and graduate ComponentConfig to GA, as mentioned by @alculquicondor earlier in kubernetes/kubernetes#108444 (comment)

@kerthcet
Member

kerthcet commented Jun 14, 2022

Tracking TODOs:

  • Updating samples in the website to use the v1 API.
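As a sketch of what that sample update involves, the main change in each website sample is bumping the apiVersion to the GA group version while the rest of the structure carries over (the exact fields shown are illustrative; schedulerName is shown with its default value):

```yaml
# Before graduation, samples used a beta group version, e.g.:
#   apiVersion: kubescheduler.config.k8s.io/v1beta3
# After graduation in 1.25, samples use the GA group version:
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
```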

@Priyankasaggu11929 Priyankasaggu11929 added this to the v1.25 milestone Jun 15, 2022
@Priyankasaggu11929 Priyankasaggu11929 added tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team and removed tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team labels Jun 15, 2022
@Priyankasaggu11929
Member

Hello @kerthcet, @alculquicondor 👋, 1.25 Enhancements team here.

Just checking in as we approach enhancements freeze on 18:00 PST on Thursday June 23, 2022.

For the record, this enhancement is targeting stage stable for 1.25 (correct me if otherwise)

Here's where this enhancement currently stands:

  • KEP file using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable
  • KEP has an updated, detailed test plan section filled out
  • KEP has up to date graduation criteria
  • KEP has a production readiness review that has been completed and merged into k/enhancements.

For the record, the status of this enhancement is marked as tracked. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

@Priyankasaggu11929 Priyankasaggu11929 added stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status and removed stage/beta Denotes an issue tracking an enhancement targeted for Beta status labels Jun 15, 2022
@Priyankasaggu11929
Member

Priyankasaggu11929 commented Jul 21, 2022

Hello @kerthcet 👋

Checking in once more as we approach 1.25 code freeze at 01:00 UTC on Wednesday, 3rd August 2022.

Please ensure the following items are completed:

Please verify if there are any additional k/k PRs besides the ones listed above.

Please plan to get the open PRs merged by the code freeze deadline. The status of the enhancement is currently marked as at-risk.

Also kindly update the issue description with the relevant links for tracking purposes. Thank you so much!

@Priyankasaggu11929
Member

Priyankasaggu11929 commented Aug 1, 2022

With the code k/k PRs merged now, this enhancement is ready for the 1.25 code freeze.

Will wait for the following PR to merge as well before marking it as tracked:

(updated) The status is now marked as tracked. Thank you!

@kerthcet
Member

kerthcet commented Aug 1, 2022

FYI, we have another related PR here, kubernetes/kubernetes#111547; it adds just one log warning. @Priyankasaggu11929

@Priyankasaggu11929
Member

Thanks so much for pointing that out, @kerthcet. I have updated my comments above with the open PR ^ 🙂

@Priyankasaggu11929
Member

With k/k PR kubernetes/kubernetes#111547 merged now, marking this enhancement as tracked for code freeze. Thank you so much @kerthcet.

@rhockenbury rhockenbury added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Sep 11, 2022
@kikisdeliveryservice
Member

As the KEP has been marked as implemented, closing this issue.
