
Publishing Kubernetes packages on community infrastructure #1731

Open
8 of 49 tasks
justaugustus opened this issue Apr 30, 2020 · 45 comments
Assignees
Labels
sig/release Categorizes an issue or PR as relevant to SIG Release.
stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status
tracked/out-of-tree Denotes an out-of-tree enhancement issue, which does not need to be tracked by the Release Team

Comments

@justaugustus
Member

justaugustus commented Apr 30, 2020

Enhancement Description


Milestones and Tasks Checklist

Milestone 1.0—Code Deliverable

  • Success looks like: debs and rpms are in multiple locations (Google infra and third-party hosting service(s)) @detiber
    • Define User Scenarios and Requirements (Configuration, management, monitoring, etc.)
    • Describe proposed Flow (what are key actions that the user will take? How will they take them?)
    • Choose the third-party hosting service/Set up a package host of some sort @detiber, @leonardpahlke helping
      • Identified some options to explore
      • Reach out to representatives of those options to see whether they'll work with us on this (WIP)
    • Do a spike test to see if the third-party hosting service meets our needs @detiber, @leonardpahlke helping
    • Publish the debs/rpms to the third-party hosting service @detiber
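The spike test and publish steps above ultimately come down to serving a repository index that package managers can verify. The following is a minimal, hypothetical sketch (not actual kubepkg or krel code; file names and fields are illustrative only) of producing a flat apt-style `Packages` stanza with the size and SHA256 checksum apt uses for integrity:

```python
# Hypothetical sketch: build a minimal flat-repo "Packages" entry for a .deb,
# the kind of index a spike test against a third-party host would need to serve.
# This is NOT kubepkg/krel code; names and fields are illustrative only.
import hashlib
import os

def packages_entry(deb_path: str, name: str, version: str, arch: str) -> str:
    """Return one apt Packages stanza with the checksum apt uses for integrity."""
    with open(deb_path, "rb") as f:
        data = f.read()
    return "\n".join([
        f"Package: {name}",
        f"Version: {version}",
        f"Architecture: {arch}",
        f"Filename: {os.path.basename(deb_path)}",
        f"Size: {len(data)}",
        f"SHA256: {hashlib.sha256(data).hexdigest()}",
        "",
    ])

if __name__ == "__main__":
    # Stand-in payload; a real spike would use an actual kubeadm/kubelet .deb.
    with open("kubeadm_example_amd64.deb", "wb") as f:
        f.write(b"fake deb payload")
    print(packages_entry("kubeadm_example_amd64.deb", "kubeadm", "1.28.0", "amd64"))
```

A real hosting candidate would additionally need to serve a signed `Release`/`InRelease` file referencing this index, which is where the GPG questions below come in.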

Milestone 1.0—Documentation Deliverable

Milestone 1.0—Risk Mitigation

  • Success looks like: All risk matters for this milestone are accounted for, plans in place to mitigate
    • Risk: A new GPG key will be issued incompatible with the existing GPG key managed by Google. This is a breaking change for the existing infrastructures using DEB/RPM packages as installation artifacts
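The risk above is essentially a pinned-trust mismatch: clients that pin the old Google-managed key will reject metadata signed by a new community key. A toy illustration follows; the "fingerprint" here is simulated as a SHA-256 of the key bytes (real apt pins OpenPGP key fingerprints), and both key values are invented:

```python
# Toy illustration of the key-rotation risk described above. Real apt pins
# OpenPGP keys; here a "fingerprint" is simulated as SHA-256 of the key bytes.
import hashlib

def fingerprint(key_bytes: bytes) -> str:
    return hashlib.sha256(key_bytes).hexdigest()

def client_accepts(pinned_fp: str, repo_key: bytes) -> bool:
    """A client only trusts repo metadata signed by its pinned key."""
    return fingerprint(repo_key) == pinned_fp

old_google_key = b"google-managed-signing-key"        # hypothetical
new_community_key = b"community-managed-signing-key"  # hypothetical

pinned = fingerprint(old_google_key)
assert client_accepts(pinned, old_google_key)         # existing installs work
assert not client_accepts(pinned, new_community_key)  # rotation breaks them
# Mitigation: publish and announce the new key so clients re-pin before cut-over.
```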

Milestone 1.0—Questions Resolved

  • Resolve whether this is needed: give Release Engineering access to the sandbox env to work on it @ameukam, @onlydole helping
  • Develop sandbox env (GCP project) and give access to Rel Eng to work on it @ameukam, @onlydole helping
  • Needs more discussion: a meeting to specifically discuss whether we are doing GPG or owning the packages, then who owns the new GPG keys and how and when we store them. @ameukam, @onlydole helping
  • Generate new keys: Sign Kubernetes system packages with GPG (release#2627)
  • Decide on a plan: be able to issue new GCP trusted keys for the entire ecosystem/whoever relies on those packages to deploy/install/upgrade the K8s project
  • Find/line up people who can build packages for Debian and CentOS to work on build tooling (@upodroid to help build Debian packages)

Milestone 2.0—Code Deliverable

  • Success looks like: implementation (source code) in our repositories, such that we can publish releases by ourselves without having to rely on Google people at all
    • Define User Scenarios and Requirements (Configuration, management, monitoring, etc.)
    • Proposed Flow (what are key actions that the user will take? How will they take them?)
    • Refine the current debs/rpms built by kubepkg
    • Resume discussing how we rethink / rewrite the build tools
    • Rebase the existing kubepkg work into something krel can use directly
    • krel, the Kubernetes release tool, should start including a package build step that runs the build scripts
    • Phase out publishing to the Google-controlled host in favor of having krel publish to our new host for 1–2 releases
    • Users migrate from the Google registry to the community registry, adding the signing key
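The flow above amounts to inserting a build-sign-publish stage into the release pipeline. A hypothetical ordering sketch follows; the stage names and `run_pipeline` helper are invented for illustration and are not krel's actual API:

```python
# Hypothetical ordering sketch for the krel flow described above.
# Stage names are invented for illustration; this is not krel's real API.
from typing import Callable, List

def run_pipeline(stages: List[Callable[[list], None]]) -> list:
    """Run each stage in order, collecting a simple execution log."""
    log: list = []
    for stage in stages:
        stage(log)
    return log

def build_binaries(log): log.append("build-binaries")
def build_packages(log): log.append("build-packages")      # new kubepkg-based step
def sign_packages(log): log.append("sign-packages")        # community GPG key
def publish_packages(log): log.append("publish-packages")  # community host

log = run_pipeline([build_binaries, build_packages, sign_packages, publish_packages])
print(log)
```

The key ordering constraint the milestone implies: packages must be built and signed with the community key before publishing, so that the community host never serves unsigned artifacts.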

Milestone 2.0—Documentation Deliverable

  • Success looks like: we've published docs (for release managers) on how the process works and what to do when it's not working as expected
    • Expand documentation on the top-level script to drive knowledge transfer
    • Expand documentation on the package build scripts for Debian and RPM to drive knowledge transfer

Milestone 2.0—Questions Resolved

  • Success looks like: @saschagrunert and @justaugustus have provided context and knowledge about the package build scripts for Debian and RPM packages
    • Answer: what type of infra we're going to run
    • Answer: who pays for it
    • Answer: what type of account do we have
    • Answer: what tools do/will we have available

Milestone 3.0—Documentation Deliverable

  • Success looks like: We've updated the Google build+publish+release script to download packages uploaded by the upstream release process
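A download-side sketch of this milestone: the Google-side script would fetch upstream artifacts and check them against a published checksum manifest before re-publishing. The following is a local simulation only; the file names, manifest format, and `verify` helper are hypothetical, and the real script and URLs are not shown here:

```python
# Hypothetical sketch of the download-side verification described above:
# check a fetched artifact against a published SHA256SUMS file before use.
# File names and helpers are illustrative, not the real script.
import hashlib

def parse_sha256sums(text: str) -> dict:
    """Parse 'checksum  filename' lines into {filename: checksum}."""
    sums = {}
    for line in text.strip().splitlines():
        checksum, _, filename = line.partition("  ")
        sums[filename] = checksum
    return sums

def verify(path: str, sums: dict) -> bool:
    """Return True only if the local file's SHA256 matches the manifest entry."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return sums.get(path) == digest

# Simulate an upstream artifact and its checksum manifest.
with open("kubelet_example_amd64.deb", "wb") as f:
    f.write(b"artifact bytes")
manifest = parse_sha256sums(
    hashlib.sha256(b"artifact bytes").hexdigest() + "  kubelet_example_amd64.deb"
)
assert verify("kubelet_example_amd64.deb", manifest)
```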

Milestone 4.0—Documentation Deliverable

  • Success looks like: We've published docs for end users about the old infra and the new/migrated infra (more static content)
    • Write email to community regarding the required migration to the community repository
    • Send email to community regarding the required migration to the community repository
    • SIG Release informs the community so we don't break people over just one milestone: "these are the keys we'll use to release 1.2X..."

Why is this needed?

  • It's part of the effort for the community to fully own the infrastructure related to the Kubernetes project.
  • Ensure all aspects of project releases can be staffed across companies
  • We don’t want to be dependent on a Build Admin any longer (which tends to slow releases)
  • We seek more control to eventually choose to build packages for prereleases, extend what packages are shipped, etc.

Why is it needed now?

  • The existing infrastructure relies on a workstation inside Google offices, and essentially on one person. It's neither reliable nor sustainable.

Who needs it? (User Personas WIP)

  • Google Build Team as the existing team building the system packages.

What does “done” look like? (Acceptance criteria)

  • Automated builds of deb and rpm Kubernetes packages within community infrastructure

Related open issues

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Apr 30, 2020
@justaugustus justaugustus added the sig/release Categorizes an issue or PR as relevant to SIG Release. label Apr 30, 2020
@k8s-ci-robot k8s-ci-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Apr 30, 2020
@justaugustus justaugustus added this to Backlog in SIG Release via automation Apr 30, 2020
@justaugustus justaugustus moved this from Backlog to In progress in SIG Release Apr 30, 2020
@justaugustus justaugustus added the tracked/out-of-tree Denotes an out-of-tree enhancement issue, which does not need to be tracked by the Release Team label Apr 30, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 29, 2020
@LappleApple LappleApple moved this from In progress to In Progress, but no activity in >=14 days in SIG Release Aug 7, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 29, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@LappleApple LappleApple moved this from In Progress, but no activity in >=14 days to Blocked/waiting for feedback in SIG Release Oct 7, 2020
@LappleApple LappleApple moved this from Blocked/waiting for feedback to Done (1.20) in SIG Release Oct 7, 2020
@justaugustus justaugustus removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 3, 2020
@justaugustus justaugustus reopened this Dec 3, 2020
SIG Release automation moved this from Done/Closed (1.20) to In progress Dec 3, 2020
@justaugustus justaugustus added this to the v1.21 milestone Dec 3, 2020
@justaugustus justaugustus removed this from In progress in SIG Release Dec 3, 2020
@justaugustus justaugustus added this to Backlog in Artifact Management (SIG Release) via automation Dec 3, 2020
@justaugustus justaugustus moved this from Backlog to KEPs in Artifact Management (SIG Release) Dec 3, 2020
@justaugustus justaugustus self-assigned this Dec 3, 2020
@justaugustus justaugustus added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Dec 3, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 3, 2021
@xmudrii
Member

xmudrii commented Jul 4, 2023

Does this enhancement work planned for 1.28 require any new docs or modification to existing docs?

Yes. I'll follow up on this with folks and I'll make sure that we have everything in place by the deadline.

@ramrodo
Member

ramrodo commented Jul 6, 2023

Hello @xmudrii, 1.28 Comms here.

Is this enhancement planned to have an opt-in process for Feature Blog delivery?

The deadline for opt-in is on July 19th, 2023, so please consider submitting a place-holder PR at kubernetes/website for this to be considered.

Thank you!

@xmudrii
Member

xmudrii commented Jul 10, 2023

@ramrodo Is just creating a PR enough to opt-in or is there anything else that we should do?

@ramrodo
Member

ramrodo commented Jul 11, 2023

@ramrodo Is just creating a PR enough to opt-in or is there anything else that we should do?

Yes. Creating the placeholder PR is enough to opt-in for now.

@Rishit-dagli
Member

@xmudrii

Yes. I'll follow up on this with folks and I'll make sure that we have everything in place by the deadline.

A reminder on this, since there is one week to the deadline; this can even be a draft PR right now.

@xmudrii
Member

xmudrii commented Jul 12, 2023

@Rishit-dagli Thank you, I'll make sure to create placeholders by the end of the week.

@xmudrii
Member

xmudrii commented Jul 14, 2023

@Rishit-dagli @ramrodo I created placeholder PRs for both docs and feature blog: kubernetes/website#42022 and kubernetes/website#42023

@ruheenaansari34

Hey again @justaugustus 👋
Just checking in as we approach Code freeze at 01:00 UTC Friday, 19th July 2023.

I don't see any code (k/k) update PR(s) in the issue description so if there are any k/k related PR(s) that we should be tracking for this KEP please link them in the issue description above.

As always, we are here to help if any questions come up. Thanks!

@xmudrii
Member

xmudrii commented Jul 17, 2023

@ruheenaansari34 This is an out-of-tree KEP, so as of now, it doesn't require any code changes in k/k.

@Atharva-Shinde
Contributor

Hey @justaugustus @xmudrii this enhancement is now marked as tracked for the v1.28 Code freeze.

@Rishit-dagli
Member

Hello @justaugustus @xmudrii 👋 please take a look at Documenting for a release - PR Ready for Review to get your docs PR ready for review before Tuesday 25th July 2023. Thank you!

Ref: kubernetes/website#42022

@sftim
Contributor

sftim commented Jul 24, 2023

This is an out-of-tree KEP, so as of now, it doesn't require any code changes in k/k.

For the alpha, is there any change to document?

@xmudrii
Member

xmudrii commented Jul 24, 2023

For the alpha, is there any change to document?

Yes, we plan to mention OBS in docs, but we're still trying to figure out the best way for that.

@sftim
Contributor

sftim commented Jul 24, 2023

How about on https://k8s.dev/docs/ - and then revisit for beta?

@xmudrii
Member

xmudrii commented Jul 24, 2023

@sftim I'm not sure if that's visible enough. Also, I don't think this KEP will be going through standard alpha/beta/stable criteria, but that's also something to discuss.

@npolshakova

/remove-label lead-opted-in

@k8s-ci-robot k8s-ci-robot removed the lead-opted-in Denotes that an issue has been opted in to a release label Aug 27, 2023
@sftim
Contributor

sftim commented Sep 26, 2023

This is still labelled alpha; is that appropriate?

@xmudrii
Member

xmudrii commented Sep 26, 2023

We're hopefully going to graduate it to beta; I'll check with the leads about that.

@npolshakova

Hello @justaugustus @xmudrii, 1.29 Enhancements team here! Is this enhancement targeting 1.29? If it is, can you follow the instructions here to opt in the enhancement and make sure the lead-opted-in label is set so it can get added to the tracking board? Thanks!

@salehsedghpour
Contributor

Hello 👋 1.30 Enhancements Lead here,

I'm closing milestone 1.28 now.
If you wish to progress this enhancement in v1.30, please follow the instructions here to opt in the enhancement and make sure the lead-opted-in label is set so it can get added to the tracking board, and finally add /milestone v1.30. Thanks!

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.28 milestone Jan 16, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 16, 2024
@xmudrii
Member

xmudrii commented Apr 16, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 16, 2024
Projects
Status: Tracked
Status: 📝 Tracking Issues
Development

No branches or pull requests