
ApplySet : kubectl apply --prune redesign and graduation strategy #3659

Closed
3 of 4 tasks
KnVerey opened this issue Nov 15, 2022 · 33 comments

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/cli Categorizes an issue or PR as relevant to SIG CLI. stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status

Comments

@KnVerey
Contributor

KnVerey commented Nov 15, 2022

Enhancement Description

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.

/sig cli

@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Nov 15, 2022
@soltysh
Contributor

soltysh commented Jan 12, 2023

/assign @KnVerey
/stage alpha
/milestone v1.27
/label lead-opted-in

@k8s-ci-robot k8s-ci-robot added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Jan 12, 2023
@k8s-ci-robot k8s-ci-robot added this to the v1.27 milestone Jan 12, 2023
@k8s-ci-robot k8s-ci-robot added the lead-opted-in Denotes that an issue has been opted in to a release label Jan 12, 2023
@marosset
Contributor

Hello @KnVerey 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on 18:00 PDT Thursday 9th February 2023.

This enhancement is targeting stage alpha for v1.27 (correct me if otherwise)

Here's where this enhancement currently stands:

  • KEP readme using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable for latest-milestone: v1.27
  • KEP readme has an updated detailed test plan section filled out
  • KEP readme has up to date graduation criteria
  • KEP has a production readiness review that has been completed and merged into k/enhancements.

For this enhancement, it looks like #3661 will address most of these requirements.

The status of this enhancement is currently marked as at risk. Please keep the issue description up-to-date with the appropriate stages as well.
Thank you!

@KnVerey
Contributor Author

KnVerey commented Feb 9, 2023

@marosset I believe all of the requirements have been met with the merging of #3661 today!

@marosset
Contributor

marosset commented Feb 9, 2023

This enhancement now meets all of the requirements to be tracked in v1.27.
Thanks!

@marosset marosset added the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Feb 9, 2023
@KnVerey KnVerey changed the title kubectl apply --prune redesign and graduation strategy ApplySet : kubectl apply --prune redesign and graduation strategy Mar 6, 2023
@KnVerey
Contributor Author

KnVerey commented Mar 6, 2023

Docs placeholder PR: kubernetes/website#39818

@marosset
Contributor

marosset commented Mar 8, 2023

Hi @KnVerey 👋,

Checking in as we approach 1.27 code freeze at 17:00 PDT on Tuesday 14th March 2023.

Please ensure the following items are completed:

  • All PRs to the Kubernetes repo that are related to your enhancement are linked in the above issue description (for tracking purposes).
  • All PRs are fully merged by the code freeze deadline.

Please let me know if there are any PRs in k/k I should be tracking for this KEP.

As always, we are here to help should questions come up. Thanks!

@LukeMwila

Hi @KnVerey, I’m reaching out from the 1.27 Release Docs team. This enhancement is marked as ‘Needs Docs’ for the 1.27 release.

Please follow the steps detailed in the documentation to open a PR against the dev-1.27 branch in the k/website repo. This PR can be just a placeholder at this time, and must be created by March 16. For more information, please take a look at Documenting for a release to familiarize yourself with the documentation requirements for the release.

Please feel free to reach out with any questions. Thanks!

@marosset
Contributor

Unfortunately, the implementation PRs associated with this enhancement were not merged by code freeze, so this enhancement is being removed from the release.

If you would like to file an exception please see https://github.com/kubernetes/sig-release/blob/master/releases/EXCEPTIONS.md

/milestone clear
/remove-label tracked/yes
/label tracked/no

@k8s-ci-robot k8s-ci-robot added the tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team label Mar 15, 2023
@k8s-ci-robot k8s-ci-robot removed this from the v1.27 milestone Mar 15, 2023
@k8s-ci-robot k8s-ci-robot removed the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Mar 15, 2023
@KnVerey
Contributor Author

KnVerey commented Mar 15, 2023

Hi @marosset they did make the release actually! You can see them here: https://github.com/orgs/kubernetes/projects/128/views/2. I will update the issue description. The only feature we were originally targeting as part of the first alpha that did not make it was kubectl diff support. The featureset in kubectl apply is complete as intended for this release.

@marosset
Contributor

/milestonve v1.27
/label tracked/yes
/remove-label tracked/no

@k8s-ci-robot k8s-ci-robot added tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team and removed tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team labels Mar 15, 2023
@marosset
Contributor

Hi @marosset they did make the release actually! You can see them here: https://github.com/orgs/kubernetes/projects/128/views/2. I will update the issue description. The only feature we were originally targeting as part of the first alpha that did not make it was kubectl diff support. The featureset in kubectl apply is complete as intended for this release.

@KnVerey I added this issue back into v1.27.
Thanks for linking all the PRs above!

@sftim
Contributor

sftim commented Mar 20, 2023

BTW, nearly all the labels we register are using subdomains of kubernetes.io. This KEP is using *.k8s.io keys.

If you want to make life easier for end users, get an exception in to change the labels before beta (ideally, before the alpha release). I know it's a bit late, but it looks like we missed that detail in earlier reviews.

See https://kubernetes.io/docs/reference/labels-annotations-taints/ for the list of registered keys that we use for labels and annotations.

@KnVerey
Contributor Author

KnVerey commented Mar 24, 2023

/milestone v1.27

(there was a typo in the last attempt to apply this)

@k8s-ci-robot k8s-ci-robot added this to the v1.27 milestone Mar 24, 2023
@Sakalya

Sakalya commented May 3, 2023

@KnVerey is there a way I can contribute to this?

@KnVerey
Contributor Author

KnVerey commented May 3, 2023

is there a way I can contribute to this?

Yes, we'll have plenty of work to do on this for v1.28! Some of it still needs to be defined through KEP updates before it can be started though. Please reach out in the sig-cli channel on Kubernetes Slack.

@KnVerey
Contributor Author

KnVerey commented May 4, 2023

/assign @justinsb

@uhthomas

Hi!

I'm looking to use applysets and struggling to understand how to use them at the cluster scope.

The KEP seems to suggest that --applyset=namespace/some-namespace should be possible, though I don't believe it is, as the source code seems to explicitly allow only ConfigMaps, Secrets and CRDs. See the example:

kubectl apply -n myapp --prune --applyset=namespaces/myapp -f .

My use case is that I apply a big v1/List with everything in it.

apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ConfigMap
  data: {}
- ...

I get this error:

$ /usr/local/bin/kubectl --kubeconfig= --cluster= --context= --user= apply --server-side --applyset=automata --prune -f -
error: namespace is required to use namespace-scoped ApplySet
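
For reference, the namespace-scoped form does work once a namespace is supplied (a rough sketch; when only a name is given, kubectl appears to default to a Secret parent, which it can create automatically, and the names here are illustrative):

KUBECTL_APPLYSET=true kubectl apply \
  --server-side \
  --namespace automata \
  --applyset=automata \
  --prune \
  -f -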

@uhthomas

uhthomas commented May 11, 2023

So, I ended up making a custom resource specifically for the ApplySet, but actually getting it to work is tricky.

kubectl can't create the custom resource

So, unlike with ConfigMaps and Secrets, kubectl cannot create the custom resource.

error: custom resource ApplySet parents cannot be created automatically

Missing tooling annotation

The annotation applyset.kubernetes.io/tooling must be set to kubectl/v1.27.1:

error: ApplySet parent object "applysets.starjunk.net/automata" already exists and is missing required annotation "applyset.kubernetes.io/tooling"

Missing ApplySet ID label

So, now I have to replicate this by hand?...

Sure, here's a go.dev/play.

error: ApplySet parent object "applysets.starjunk.net/automata" exists and does not have required label applyset.kubernetes.io/id

Missing contains-group-resources annotation

The value of this annotation will be tedious to replicate by hand. Fortunately, it can be blank.

error: parsing ApplySet annotation on "applysets.starjunk.net/automata": kubectl requires the "applyset.kubernetes.io/contains-group-resources" annotation to be set on all ApplySet parent objects
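
For reference, a rough sketch of backfilling all three of these by hand on an existing parent (using the resource and name from the errors above; the id value is a placeholder, since kubectl's error message shows the expected value):

# backfill the required ApplySet metadata on an existing custom parent
kubectl annotate applysets.starjunk.net automata \
  applyset.kubernetes.io/tooling=kubectl/v1.27.1 \
  applyset.kubernetes.io/contains-group-resources=""
# the id label value is a placeholder; use the value kubectl reports
kubectl label applysets.starjunk.net automata \
  applyset.kubernetes.io/id=applyset-REPLACE_WITH_EXPECTED_ID-v1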

Server-Side conflicts

It looks like, because I had to create those fields manually and I did so with server-side apply, there are now conflicts which need to be resolved. The fix is to defer management of those fields to kubectl; see here.

error: Apply failed with 1 conflict: conflict with "kubectl-applyset": .metadata.annotations.applyset.kubernetes.io/tooling
statefulset.apps/vault serverside-applied
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
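
(For reference, a sketch of the re-run that takes ownership of those fields; the flags match the full command shown later in this thread.)

KUBECTL_APPLYSET=true kubectl apply \
  --server-side \
  --force-conflicts \
  --applyset=applyset/automata \
  --prune \
  -f list.json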

After that was all said and done, it looks like this now works as expected!

https://github.com/uhthomas/automata/actions/runs/4942497931

I really hope my feedback is helpful. Let me know if there's anything I can do to help.

@uhthomas

Also, not sure if it's relevant, but there are lots of warnings about client-side throttling.

I0511 00:16:48.924009    2333 request.go:696] Waited for 1.199473039s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/apps/v1/namespaces/vault/statefulsets?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:16:59.124001    2333 request.go:696] Waited for 11.398419438s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/objectbucket.io/v1alpha1/namespaces/media/objectbucketclaims?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:09.124063    2333 request.go:696] Waited for 21.397443643s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/apps/v1/namespaces/vault-csi-provider/daemonsets?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:19.124416    2333 request.go:696] Waited for 31.397017627s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/secrets-store.csi.x-k8s.io/v1/namespaces/vault-csi-provider/secretproviderclasses?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:29.324390    2333 request.go:696] Waited for 41.596456299s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/ceph.rook.io/v1/namespaces/node-feature-discovery/cephobjectstores?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:39.524221    2333 request.go:696] Waited for 51.795742479s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/crdb.cockroachlabs.com/v1alpha1/namespaces/rook-ceph/crdbclusters?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:49.723903    2333 request.go:696] Waited for 1m1.994821913s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/api/v1/namespaces/mimir/serviceaccounts?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:17:59.724367    2333 request.go:696] Waited for 1m11.994827196s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/api/v1/namespaces/grafana-agent-operator/services?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:09.924321    2333 request.go:696] Waited for 1m22.194264157s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/rbac.authorization.k8s.io/v1/namespaces/snapshot-controller/roles?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:20.123847    2333 request.go:696] Waited for 1m32.393300823s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/monitoring.grafana.com/v1alpha1/namespaces/immich/logsinstances?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:30.124466    2333 request.go:696] Waited for 1m42.393384018s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/rbac.authorization.k8s.io/v1/namespaces/cert-manager/rolebindings?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:40.324001    2333 request.go:696] Waited for 1m52.592402588s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/apps/v1/namespaces/snapshot-controller/deployments?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:18:50.324112    2333 request.go:696] Waited for 2m2.59200303s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/ceph.rook.io/v1/namespaces/rook-ceph/cephfilesystems?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1
I0511 00:19:00.523616    2333 request.go:696] Waited for 2m12.790972393s due to client-side throttling, not priority and fairness, request: GET:https://tailscale-operator.tailnet-fbec.ts.net/apis/policy/v1/namespaces/vault-csi-provider/poddisruptionbudgets?labelSelector=applyset.kubernetes.io%2Fpart-of%3Dapplyset-xjZyH1FmMYtP-oSkfLUgubxDYIbsrD-IuDRLmezicIo-v1

@uhthomas

uhthomas commented May 11, 2023

This may also be worth thinking about: spotahome/redis-operator#592. In some cases, it can lead to data loss. I'm not sure if this is any worse than the original implementation of prune, to be fair.

@btrepp

btrepp commented May 14, 2023

I think the examples list namespaces as potential ApplySet parents, but at the moment the tooling doesn't allow that; the errors say it isn't allowed. Mainly I thought this might be a natural place for a very declarative approach, e.g. the ApplySet covers the entire namespace, and you add to the ApplySet to add more resources.

I also think that, while I completely understand and agree with 'an ApplySet should only change one namespace', in practice this makes things a bit tricky, as common tools do seem to span multiple namespaces quite often, e.g. cert-manager/cert-manager#5471. For cert-manager I usually patch it to not affect kube-system, but it gets confusing quickly :).

So, from the above, I've pretty quickly hit 'now I have to create my own CRD' to get the additional-namespaces capability.
I think that if namespaces were allowed to be parents (being cluster-scoped, they could span multiple namespaces, with one being the parent/managing one), that would improve the UX.

It also appears that a namespaced parent (e.g. a Secret) can't span multiple namespaces, so if you do need to change two namespaces, you need a cluster-scoped resource anyway.

Despite some understandable alpha hiccups, though, it's actually pretty usable! I'd say the best UX at the moment is to use it heavily with Kustomize, so you can wrangle other software into working with it.

@Atharva-Shinde Atharva-Shinde removed this from the v1.27 milestone May 14, 2023
@Atharva-Shinde Atharva-Shinde removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team lead-opted-in Denotes that an issue has been opted in to a release labels May 14, 2023
@schlichtanders

schlichtanders commented Oct 9, 2023

@btrepp @uhthomas I would like to transition to ApplySets, but I face the namespace problem - you seem to have created custom CRDs which can be used as an ApplySet parent. Unfortunately, I couldn't find the respective resources.

Do you or someone else know of plug-and-play ApplySet CRDs which can be used for seamless cluster-wide pruning?

EDIT: @uhthomas, I found this commit by you which seems to suggest that you could successfully simplify the setup by using some kubectl commands. Unfortunately, I couldn't find the corresponding commands. Can you help? Asked separately below

@uhthomas

uhthomas commented Oct 9, 2023

@schlichtanders I believe this comment should have everything you need? Let me know if there's more I can do to help.

#3659 (comment)

@schlichtanders

@uhthomas, I found this commit by you which seems to suggest that you successfully simplified the setup by using some kubectl commands. Unfortunately, I couldn't find the corresponding commands. Can you help?

@uhthomas

uhthomas commented Oct 9, 2023

@schlichtanders To be clear, there are no kubectl commands which simplify this setup. You must create a CRD and custom resource as explained in my other comment. You then must follow what I've written to create the appropriate annotations and labels for the custom resource, which can be removed later as kubectl will take over. The only command which is run for all of this is KUBECTL_APPLYSET=true kubectl apply --server-side --force-conflicts --applyset=applyset/automata --prune -f list.json.

@schlichtanders

schlichtanders commented Oct 9, 2023

Thank you Thomas for the clarification 🙏

I have now compiled my applyset.yaml as follows, with the help of your comment:

# for details on the annotations see https://kubernetes.io/docs/reference/labels-annotations-taints/
# the applyset.kubernetes.io/id depends on the group; however, kubectl will complain and show you the correct id to use anyway

apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
  name: "applysets.jolin.io"
  labels:
    applyset.kubernetes.io/is-parent-type: "true"
spec:
  group: "jolin.io"
  names:
    kind: "ApplySet"
    plural: "applysets"
  scope: Cluster
  versions:
  - name: "v1"
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: "object"

---

apiVersion: "jolin.io/v1"
kind: "ApplySet"
metadata:
  name: "applyset"
  annotations:
    applyset.kubernetes.io/tooling: "kubectl/1.28"
    applyset.kubernetes.io/contains-group-resources: ""
  labels:
    applyset.kubernetes.io/id: "applyset-TFtfhJJK3oDKzE2aMUXgFU1UcLI0RI8PoIyJf5F_kuI-v1"

I need to deploy the above yaml first
(EDIT: I need to repeat this a couple of times, because the second part of the yaml requires the CRD from the first part to be available, which takes a moment; see the kubectl wait sketch after the command below for one way around this)

kubectl apply --server-side --force-conflicts -f applyset.yaml
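
One way to avoid repeating the apply is to split the two documents and wait for the CRD to be established before creating the parent (a sketch; the file names here are made up):

kubectl apply --server-side -f applyset-crd.yaml
kubectl wait --for=condition=Established --timeout=60s crd/applysets.jolin.io
kubectl apply --server-side --force-conflicts -f applyset-parent.yaml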

and can then run kubectl with applyset, similar to how you mentioned:

KUBECTL_APPLYSET=true kubectl apply --server-side --force-conflicts --applyset=applyset.jolin.io/applyset --prune -f my-k8s-deployment.yaml

Seems to work so far 🥳

Note:

  • I am using jolin.io as my group, change it to your group
  • if you do so, kubectl will complain that the applyset.kubernetes.io/id label is not as expected and will thankfully output the correct id (which is a hash of something that also includes the group)

For more up-to-date information on all the annotations, see https://kubernetes.io/docs/reference/labels-annotations-taints/

@uhthomas

uhthomas commented Oct 9, 2023

Glad you were able to get it working.

I also mentioned this in my original comment, but the ID is generated here and can be generated in-browser with this program I wrote. Good to know it tells you what it should be anyway, so I guess trial and error works too.
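
For anyone who wants to compute the ID locally, here is a shell sketch based on the format described on the well-known labels page linked above (base64url-encoded SHA-256 of "<name>.<namespace>.<kind>.<group>", with the namespace left empty for a cluster-scoped parent; verify the result against what kubectl reports):

# expected applyset.kubernetes.io/id for a cluster-scoped parent
# named "applyset", kind "ApplySet", group "jolin.io"
printf '%s' "applyset..ApplySet.jolin.io" \
  | openssl dgst -sha256 -binary \
  | openssl base64 \
  | tr '+/' '-_' | tr -d '=' \
  | sed 's/^/applyset-/; s/$/-v1/'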

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 30, 2024
@armingerten

The only feature we were originally targeting as part of the first alpha that did not make it was kubectl diff support. The featureset in kubectl apply is complete as intended for this release.

Is there already a timeline for when ApplySets will be supported by kubectl diff? There also seems to be another stale issue about this: kubernetes/kubectl#1435

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Apr 19, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@btrepp

btrepp commented Apr 20, 2024

Is this really not planned now? That's kinda disappointing; it was a really good feature and I was looking forward to it.
