CephFS in-tree provisioner to CSI driver migration #2924

Closed · 4 tasks
humblec opened this issue Sep 1, 2021 · 40 comments
Labels
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • sig/storage: Categorizes an issue or PR as relevant to SIG Storage.
  • stage/alpha: Denotes an issue tracking an enhancement targeted for Alpha status.

Comments

@humblec
Contributor

humblec commented Sep 1, 2021

Enhancement Description

Parent Enhancement #625

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Sep 1, 2021
@humblec
Contributor Author

humblec commented Sep 1, 2021

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 1, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 30, 2021
@humblec
Contributor Author

humblec commented Jan 5, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 5, 2022
@humblec
Contributor Author

humblec commented Jan 5, 2022

@Jiawei0227 would like to track this for 1.24 as alpha.

@Jiawei0227
Contributor

@Jiawei0227 would like to track this for 1.24 as alpha.

SG! Thanks

@xing-yang
Contributor

/milestone v1.24

@k8s-ci-robot k8s-ci-robot added this to the v1.24 milestone Jan 11, 2022
@gracenng gracenng added the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Jan 12, 2022
@hosseinsalahi

hosseinsalahi commented Jan 21, 2022

Hello @humblec

v1.24 Enhancements team here.

Just checking in as we approach enhancements freeze at 18:00 PT on Thursday, Feb 3rd, 2022. This enhancement is targeting alpha for v1.24.

Here’s where this enhancement currently stands:

  • Updated KEP file using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable for this release.
  • KEP has a test plan section filled out.
  • KEP has up-to-date graduation criteria.
  • KEP has a production readiness review that has been completed and merged into k/enhancements.

The status of this enhancement is marked as tracked. Please keep the issue description and the targeted stage up-to-date for release v1.24.
Thanks!

@humblec
Contributor Author

humblec commented Jan 27, 2022

@encodeflush this is targeted as alpha in 1.24, not beta, to be clear.

humblec added a commit to humblec/enhancements that referenced this issue Jan 27, 2022
- One-line PR description: Enable intree cephfs plugin migration with the help of migration translation lib
- Issue Link: kubernetes#2924
- Other comments:

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
humblec added four more commits with the same message to humblec/enhancements that referenced this issue on Jan 27 and Jan 28, 2022
rikatz pushed a commit with the same message to rikatz/enhancements that referenced this issue Feb 1, 2022
@didicodes

Hi @humblec, 1.24 Docs shadow here. 👋

This enhancement is marked as Needs Docs for the 1.24 release.

Please follow the steps detailed in the documentation to open a PR against the dev-1.24 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday 31st March 2022, 18:00 PDT.

Also, if needed take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.

Thank you! 🙌

@humblec
Contributor Author

humblec commented Mar 9, 2022

The in-tree cephfs plugin only supports the inline volume source model (only the mounter interface) at present; that is, the provision and delete implementations are missing from this driver.

https://github.com/kubernetes/examples/blob/master/volumes/cephfs/cephfs.yaml

While we come up with migration for this plugin, I am wondering what the expectation would be:

  • Is it supposed to be mounter (mount/unmount) support only after migration has been switched on [1]? Or
  • Can we think about provision/delete with migration ON, but directed to the CSI driver?

One thing to note here: this plugin does not have StorageClass support in-tree at the moment, so adding provision/delete support with migration may not make much sense, as provisioning can go directly to the CSI provisioner instead of taking the route through the in-tree plugin.

If the expectation is just to support "mounter" for existing volumes, do we anticipate any issues with the lack of implementation this driver currently has?

I would like to request your input before taking this further. @Jiawei0227 @msau42 @jsafrane @xing-yang

[1] Even for mounter support, there are some glitches, which I am still exploring.
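
For background on what "migration switched on" means mechanically: CSI migration translates an in-tree volume spec into an equivalent CSI spec before handing it to the installed CSI driver, so existing volumes can be mounted and unmounted through CSI without changing any user objects. The sketch below illustrates that translation idea for CephFS. It is a minimal, self-contained sketch: the struct shapes mirror the in-tree CephFS volume source fields, but the function name, attribute keys, and volume-handle format are illustrative assumptions, not the actual csi-translation-lib API.

```go
package main

import (
	"fmt"
	"strings"
)

// CephFSVolumeSource mirrors the fields of the in-tree CephFS volume source:
// monitor addresses, a path within the filesystem, the Ceph user, and a
// read-only flag.
type CephFSVolumeSource struct {
	Monitors []string
	Path     string
	User     string
	ReadOnly bool
}

// CSIVolumeSource is a simplified stand-in for a CSI persistent volume source.
type CSIVolumeSource struct {
	Driver           string
	VolumeHandle     string
	ReadOnly         bool
	VolumeAttributes map[string]string
}

// translateCephFSToCSI is a hypothetical translation function in the spirit
// of csi-translation-lib: each in-tree field is mapped onto a parameter the
// CSI driver understands. The attribute keys and handle format are
// illustrative only.
func translateCephFSToCSI(src CephFSVolumeSource) (CSIVolumeSource, error) {
	if len(src.Monitors) == 0 {
		return CSIVolumeSource{}, fmt.Errorf("cephfs volume source has no monitors")
	}
	return CSIVolumeSource{
		// cephfs.csi.ceph.com is the CephFS CSI driver name.
		Driver: "cephfs.csi.ceph.com",
		// In-tree CephFS volumes carry no stable volume ID, so a handle
		// would have to be synthesized during translation.
		VolumeHandle: "cephfs-intree-" + src.Path,
		ReadOnly:     src.ReadOnly,
		VolumeAttributes: map[string]string{
			"monitors": strings.Join(src.Monitors, ","),
			"rootPath": src.Path,
			"user":     src.User,
		},
	}, nil
}

func main() {
	// Example monitor addresses, path, and user for illustration.
	src := CephFSVolumeSource{
		Monitors: []string{"10.16.154.78:6789", "10.16.154.82:6789"},
		Path:     "/",
		User:     "admin",
	}
	csi, err := translateCephFSToCSI(src)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", csi)
}
```

Note that the translation has only mount-relevant fields to work with; there is no in-tree provisioning state to carry over, which is why the mount-only option is the natural scope.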

@jsafrane
Member

jsafrane commented Mar 9, 2022

IMO the first option makes more sense. Existing PVs should migrate silently to CSI and that's it. We should not add new features to in-tree volumes, even if they're migrated to CSI.

If anyone wants dynamic provisioning, they can keep using the existing external provisioner(s) (which should be deprecated, if we have such power), and users should be advised to use the CSI driver for provisioning.
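
A rough sketch of the routing this describes, assuming migration is gated per plugin as it is for other in-tree drivers: when the gate is on, the in-tree spec is translated and served by the CSI driver, and nothing new, including provisioning, is added on the in-tree side. The function and gate map below are illustrative simplifications, not actual kubelet code.

```go
package main

import "fmt"

// mountVolume sketches the per-plugin decision CSI migration makes at mount
// time: with the plugin's migration gate enabled, the in-tree spec is
// translated and mounted through the CSI driver; otherwise the in-tree
// plugin handles it. No provisioning path is introduced either way.
func mountVolume(plugin string, migrationGates map[string]bool) string {
	if migrationGates[plugin] {
		return fmt.Sprintf("translate %s spec and mount via CSI driver", plugin)
	}
	return fmt.Sprintf("mount via in-tree %s plugin", plugin)
}

func main() {
	gates := map[string]bool{"cephfs": true}
	// Existing CephFS PVs are served by CSI transparently...
	fmt.Println(mountVolume("cephfs", gates))
	// ...while plugins without the gate keep the in-tree path.
	fmt.Println(mountVolume("nfs", gates))
}
```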

@humblec
Contributor Author

humblec commented Mar 15, 2022

Thanks @jsafrane et al. for sharing your thoughts! Let me check further on this. 👍

@hosseinsalahi

hosseinsalahi commented Mar 21, 2022

Hello @humblec

I'm just checking in once more as we approach the 1.24 Code Freeze at 18:00 PDT on Tuesday, March 29th, 2022.

Please ensure the following items are completed:

  • All PRs to the Kubernetes repo that are related to your enhancement are merged by the code freeze deadline.
  • Have a documentation placeholder PR open by 18:00 PDT, Thursday, March 31, 2022.

For note, the status of this enhancement is currently marked as tracked.
Thank you!

@humblec
Contributor Author

humblec commented Mar 29, 2022

@Jiawei0227 @xing-yang @encodeflush Even though I am close to completing this feature, some more time is required to wrap it up completely, including validation of different scenarios, e2e testing, etc. Considering we are at the last hour of code freeze, IMO it is better to reconsider for the next release. With that, I would like to request that this feature be untracked for the 1.24 release.

@ruheenaansari34

Quick reminder - Enhancement freeze is 2 days away. If you are still looking to get this enhancement into v1.26, please plan to make the updates to the KEP yaml and README and get the PR merged.

@rhockenbury

With #3430 merged, this is marked as Tracked for v1.26.

@marosset
Contributor

Hi @humblec 👋,

Checking in once more as we approach 1.26 code freeze at 17:00 PDT on Tuesday 8th November 2022.

Please ensure the following items are completed:

  • All PRs to the Kubernetes repo that are related to your enhancement are linked in the above issue description (for tracking purposes).
  • All PRs are fully merged by the code freeze deadline.

For this enhancement, I did not see any k/k PRs linked to this issue.
Please plan to get PRs out for all k/k code so it can be merged by code freeze.
If you do have k/k PRs open, please link them to this issue (and update the issue description).

As always, we are here to help should questions come up. Thanks!

@xing-yang
Contributor

xing-yang commented Nov 4, 2022

Hi @marosset,

Just chatted with @humblec and confirmed that this is not targeting 1.26 any more. Please remove this from the tracking board. Thanks!

@xing-yang
Contributor

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.26 milestone Nov 4, 2022
@xing-yang xing-yang removed the lead-opted-in Denotes that an issue has been opted in to a release label Nov 4, 2022
@rhockenbury

/label tracked/no
/remove-label tracked/yes
/remove-label lead-opted-in

@k8s-ci-robot k8s-ci-robot added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Nov 9, 2022
@k8s-ci-robot
Contributor

@rhockenbury: Those labels are not set on the issue: lead-opted-in

In response to this:

/label tracked/no
/remove-label tracked/yes
/remove-label lead-opted-in

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 7, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 9, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Apr 10, 2023
@msau42
Member

msau42 commented Apr 10, 2023

/reopen
/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot reopened this Apr 10, 2023
@k8s-ci-robot
Contributor

@msau42: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 10, 2023
@Atharva-Shinde Atharva-Shinde removed the tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team label May 14, 2023
@carlory
Member

carlory commented Jun 29, 2023

Hi @humblec, please update the Enhancement Description because it's outdated.

BTW, the status of the KEP is withdrawn, so I think this issue can be closed.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2024
@xing-yang
Contributor

Closing this issue as the CephFS in-tree provisioner has been deprecated.
