
Distroless images #1729

Closed
justaugustus opened this issue Apr 29, 2020 · 61 comments
Assignees
Labels
  • committee/security-response: Denotes an issue or PR intended to be handled by the product security committee.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • sig/release: Categorizes an issue or PR as relevant to SIG Release.
  • stage/stable: Denotes an issue tracking an enhancement targeted for Stable/GA status.

Comments

@justaugustus
Member

justaugustus commented Apr 29, 2020

Enhancement Description

Please keep this description up to date. This will help the Enhancement Team efficiently track the evolution of the enhancement.

Will update. Tracks kubernetes/kubernetes#70249.
/area security
/sig release

@k8s-ci-robot
Contributor

@justaugustus: The label(s) area/security cannot be applied, because the repository doesn't have them

In response to this:

Enhancement Description

  • One-line enhancement description (can be used as a release note):
  • Kubernetes Enhancement Proposal: (link to kubernetes/enhancements file, if none yet, link to PR)
  • Primary contact (assignee):
  • Responsible SIGs: SIG Release
  • Enhancement target (which target corresponds to which milestone):
  • Alpha release target (x.y)
  • Beta release target (x.y)
  • Stable release target (x.y)

Please keep this description up to date. This will help the Enhancement Team efficiently track the evolution of the enhancement.

Will update. Tracks kubernetes/kubernetes#70249.
/area security
/sig release

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the sig/release Categorizes an issue or PR as relevant to SIG Release. label Apr 29, 2020
@justaugustus
Member Author

/committee product-security

@k8s-ci-robot k8s-ci-robot added the committee/security-response Denotes an issue or PR intended to be handled by the product security committee. label Apr 29, 2020
@justaugustus justaugustus added this to Backlog in SIG Release via automation Apr 29, 2020
@justaugustus justaugustus moved this from Backlog to In progress in SIG Release Apr 29, 2020
@justaugustus
Member Author

justaugustus commented May 21, 2020

/assign

Spoke w/ @tallclair earlier and I'm going to take this one over since @dims and I have been making some forward progress here, tracked in kubernetes/kubernetes#70249 + kubernetes/kubernetes#58012 and some other threads that I need to tie together.

cc: @yuwenma @dekkagaijin

@palnabarun -- This is going to require an exception, which I'm not quite ready to file, but will next week. Just putting it on your radar in the meantime...

/milestone v1.19
/stage beta

@k8s-ci-robot k8s-ci-robot added the stage/beta Denotes an issue tracking an enhancement targeted for Beta status label May 21, 2020
@k8s-ci-robot k8s-ci-robot added this to the v1.19 milestone May 21, 2020
@palnabarun
Member

@justaugustus -- Thanks for the ping! 👍

I have added this enhancement to the tracking sheet and noted that an exception would be filed in the very near future.

I'm assuming this is the KEP for this enhancement.

@Conan-Kudo

Conan-Kudo commented May 21, 2020

Are we using these images because we think that they're better? What makes them better? The KEP seems to just say "thinner and lighter" and mumbles about attack surface. Do we actually have real problems we're trying to solve here? Forgive me if I missed some discussion that can be pointed to about this; the PR about the KEP didn't seem to have any of that...

@tallclair
Member

Motivations:

  • "thinner and lighter" - not a big win once you take shared base layers into account, but it's a bit better
  • mumble mumble attack surface - maybe, but most (all?) of the dependencies that get pulled in by heavier images aren't actually running, so it would be hard to exploit a vulnerability in them
  • fewer tools available to the attacker - even if an attacker gets code execution, the code that can be executed is limited
  • CVE noise - Kubernetes users often (and should) run a vulnerability scanner against containers in the cluster, and keeping control plane images free of known CVEs is an unnecessary hassle (a scan is sketched below).
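
To make the last point concrete, here is a minimal scan sketch, assuming the Trivy scanner is installed; the image tag is illustrative and any control-plane image works the same way:

    # Scan a control-plane image for known CVEs (image reference is illustrative)
    trivy image registry.k8s.io/kube-apiserver:v1.25.0

On a Debian-based image such a report typically includes findings from OS packages the control plane never executes; on a distroless/static base there are essentially no OS packages left to flag.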

@Conan-Kudo

@tallclair Honestly, if they own the image, they own the kubelet. If all your stuff lives in Kubernetes, I think it's pretty much game over anyway.

@Conan-Kudo

Erk, I mean if they own the kubelet (the actively running thing in the image), then it doesn't matter. It's game over, and what else your image has doesn't matter.

@justaugustus
Member Author

That does not and should not deter us from making efforts to ensure the images are more secure.

@Conan-Kudo

Sure, but I still don't see how "distroless" images do that. Part of my skepticism comes from having looked into how those images are built. It's hard to trust images whose build just downloads and injects goop that would be hard to reliably audit in the first place. 😉

@dims
Member

dims commented May 22, 2020

@Conan-Kudo can you please give a specific example in kubernetes/kubernetes that you are uncomfortable with?

@Conan-Kudo

It's not bad at the k8s level. My problem is mostly with how the actual base images are constructed. The "fetching and installing packages while not respecting dependencies" and the occasional "fetching and injecting from the internet" mean that it's hard for me to trust it. It may be fine for k8s, given that most of it is in Go, but the inherent broken-dependencies thing in distroless images makes me wary.

@Conan-Kudo

Conan-Kudo commented May 22, 2020

The above also worries me for another reason: they chose Debian as the base of distroless, and yet continue to hacksaw it instead of working in Debian to fix things. I personally favor Fedora for my stuff, but when I find issues with images being too big in Fedora, instead of doing hacksaw maneuvers, I actually go and try to fix it in Fedora itself. The distro even has a minimization project to cut down the dependency web for containers to the absolute bare minimum when desired. In my view, this is the right approach to solving this problem. A Fedora variant of "distroless" would actually be ~40MB, based on their own tooling with no hacks.
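
A rough sketch of the Fedora-native approach being described, assuming the fedora-minimal base image and microdnf; the package list and binary name are illustrative, and this is not the minimization project's actual tooling:

    # Containerfile: trim a Fedora-based image using distro tooling only
    FROM registry.fedoraproject.org/fedora-minimal:36
    # Install only what the workload needs, then drop package-manager caches
    RUN microdnf install -y ca-certificates && microdnf clean all
    COPY my-static-binary /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]

Everything in such an image comes from Fedora's own repositories with dependencies resolved normally, which is the auditability property being argued for here.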

@dims
Member

dims commented May 22, 2020

"they chose Debian" .. you are talking about https://github.com/GoogleContainerTools/distroless/search?q=stretch&unscoped_q=stretch ?

@Conan-Kudo you can and should have your own CI/CD pipeline with your own images if you are doing anything meaningful, rather than relying on someone else's image. Period. If our k8s makefiles/dockerfiles do not support it, then please report a bug and we will fix it. AFAIK, all our plumbing allows injecting base images of your own, so please do that.
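
As a generic sketch of what injecting your own base image looks like (this is the common Dockerfile pattern, not the actual kubernetes/kubernetes build plumbing, and the variable, binary, and registry names are illustrative):

    # Dockerfile with a swappable base image
    ARG BASE_IMAGE=gcr.io/distroless/static
    FROM ${BASE_IMAGE}
    COPY kube-component /kube-component
    ENTRYPOINT ["/kube-component"]

    # Build against your own vetted base instead of the default:
    #   docker build --build-arg BASE_IMAGE=registry.example.com/my-base:latest .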

@Conan-Kudo

@dims Yes. And of course, I have my own pipelines for making images. I'm not lazy. 😉

But this is why I brought up the Fedora example. Because of the work by @asamalik and others, I have been able to make really tiny images in my own pipelines without hacksaw techniques, and that means the images are much easier to verify, validate, and audit. 😄

And while the tooling allows injecting my own base for this, defaults matter. Lots of people don't change a thing here, and that's why I'm talking about this at all.

@dims
Member

dims commented May 22, 2020

@Conan-Kudo I'd request you to follow this up with an issue in GoogleContainerTools/distroless, since you are worried about the techniques they use. If this is a request for Kubernetes to switch over from those images to something else, we need to file a KEP to discuss options/possibilities etc. It's going to require a community decision, and KEPs are the way we do it.

@Conan-Kudo

I have no illusions that the GCP team would care about what I think. I may consider filing a KEP, once I'm a little more comfortable with my understanding of the process.

@tallclair
Member

@Conan-Kudo I think your concerns may be with the distroless images in general, but maybe aren't applicable to the distroless:static image that we're actually using. The static image doesn't have any executables or libraries, and therefore doesn't have any dependencies that can be broken. It is meant as a base for statically compiled binaries only (no dependencies).
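
A minimal sketch of the pattern distroless:static is meant for, assuming a statically compiled Go binary; the Go version, module layout, and binary name are illustrative:

    # Stage 1: build a statically linked Go binary (CGO disabled, so no libc is needed)
    FROM golang:1.20 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

    # Stage 2: distroless/static has no shell, package manager, or shared libraries
    FROM gcr.io/distroless/static
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]

Because the final image contains little beyond the binary (plus CA certificates and similar static data), there are no dependency chains left to break or to show up in a package scan.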

@Conan-Kudo

@tallclair Perhaps; it'd also depend on how the Go compiler was sourced, but that does go quite a long way toward alleviating the issues... In general, I think the distroless images are a poor choice for reliable containers (for the reasons I outlined above), but naturally it's possible to mitigate those...

@LappleApple LappleApple moved this from In progress to In Progress, but no activity in >=14 days in SIG Release Aug 7, 2020
dekkagaijin pushed a commit to dekkagaijin/k8s-stackdriver that referenced this issue Aug 24, 2020
Rebasing the k8s images to distroless/static can make the images thinner, safer, and less vulnerable.

It will also drastically reduce churn in the total number of k8s image versions. Because many images are built on the Debian base image, and a vulnerability in that base (which surfaces a couple of times a month) forces a rebuild of every image, switching from the Debian base to distroless/static reduces the total number of k8s image versions.

See: https://github.com/kubernetes/enhancements/tree/master/keps/sig-release/1729-rebase-images-to-distroless
kubernetes/enhancements#1729
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@BenTheElder
Member

Distroless kube-proxy base image is in-progress.

@BenTheElder BenTheElder removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 30, 2022
@BenTheElder
Member

Distroless kube-proxy shipped in 1.25

@BenTheElder
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 26, 2022
@rhockenbury

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.23 milestone Oct 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 30, 2022
@BenTheElder
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 4, 2023
@BenTheElder
Member

We actually finished this in 1.25; the KEP needs updating.

kubernetes/kubernetes#111060
kubernetes/kubernetes#109406

@Atharva-Shinde Atharva-Shinde removed the tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team label May 14, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Mar 20, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
