
Arbitrary/Custom Metrics in the Horizontal Pod Autoscaler #117

Closed
5 of 7 tasks
DirectXMan12 opened this issue Oct 7, 2016 · 79 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
sig/autoscaling: Categorizes an issue or PR as relevant to SIG Autoscaling.
stage/beta: Denotes an issue tracking an enhancement targeted for Beta status.
tracked/no: Denotes an enhancement issue is NOT actively being tracked by the Release Team.

Comments

@DirectXMan12
Contributor

DirectXMan12 commented Oct 7, 2016

Arbitrary/Custom Metrics in the Horizontal Pod Autoscaler

@DirectXMan12
Contributor Author

DirectXMan12 commented Oct 7, 2016

cc @kubernetes/autoscaling @jszczepkowski @derekwaynecarr @smarterclayton

@idvoretskyi idvoretskyi modified the milestone: v1.5 Oct 11, 2016
@idvoretskyi idvoretskyi added the sig/autoscaling label Oct 13, 2016
@idvoretskyi
Member

@DirectXMan12 any updates on this issue? Can you provide its current status and update the checkboxes above?

@DirectXMan12
Contributor Author

DirectXMan12 commented Nov 16, 2016

The proposal is posted, but has not been approved yet. We only recently reached general consensus on the design and are still finalizing the exact semantics. It should be removed from the 1.5 milestone, since no code will have gone into 1.5.

@idvoretskyi
Member

@DirectXMan12 thank you for clarifying.

@idvoretskyi idvoretskyi modified the milestones: next-milestone, v1.5 Nov 16, 2016
@idvoretskyi idvoretskyi modified the milestones: v1.6, next-milestone Jan 30, 2017
@idvoretskyi idvoretskyi added the stage/alpha label Jan 30, 2017
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Feb 10, 2017
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)

HPA v2 (API Changes)

**Release note**:
```release-note
Introduces a new alpha version of the Horizontal Pod Autoscaler, including expanded support for specifying metrics.
```

Implements the API changes for kubernetes/enhancements#117.

This implements #34754, which is the new design for the Horizontal Pod Autoscaler. It includes improved support for custom metrics (and/or arbitrary metrics) as well as expanded support for resource metrics. The new HPA object is introduced in the API group "autoscaling/v2alpha1".

Note that the improved custom metric support is currently limited to per-pod metrics from Heapster -- attempting to use the new "object metrics" will simply result in an error. This will change once #34586 is merged and implemented.
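
For context, a minimal sketch of what an HPA manifest using the new autoscaling/v2alpha1 metrics list might look like. The field names reflect my reading of the v2alpha1 schema; the Deployment name and the custom metric name below are hypothetical.

```yaml
# Hypothetical example: scale on CPU utilization plus a per-pod custom metric.
apiVersion: autoscaling/v2alpha1
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: example-app                 # hypothetical scale target
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource                    # resource metric (CPU), as in HPA v1
    resource:
      name: cpu
      targetAverageUtilization: 60
  - type: Pods                        # per-pod custom metric
    pods:
      metricName: requests_per_second # hypothetical custom metric name
      targetAverageValue: "100"
```
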
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Feb 20, 2017
Automatic merge from submit-queue

Convert HPA controller to support HPA v2 mechanics

This PR converts the HPA controller to support the mechanics from HPA v2.
The HPA controller continues to make use of the HPA v1 client, but utilizes
the conversion logic to work with autoscaling/v2alpha1 objects internally.

It is the follow-up PR to #36033 and part of kubernetes/enhancements#117.

**Release note**:
```release-note
NONE
```
@idvoretskyi
Member

@DirectXMan12 please provide us with the release notes and documentation PR (or links) at https://docs.google.com/spreadsheets/d/1nspIeRVNjAQHRslHQD1-6gPv99OcYZLMezrBe3Pfhhg/edit#gid=0

@DirectXMan12
Contributor Author

@DirectXMan12 please provide us with the release notes and documentation PR (or links)

done :-)

@bgrant0607
Member

@DirectXMan12 @mwielgus What is planned for HPA in 1.8?

@fgrzadkowski

@MaciekPytel @kubernetes/sig-autoscaling-misc

@evmin

evmin commented Jul 21, 2017

Looking forward to the beta launch!

@DirectXMan12
Contributor Author

@bgrant0607 we're hoping to move v2 to beta in 1.8 (so just stabilization :-) ).

@davidopp
Member

Regarding the functionality - am I correct in understanding that if you wanted to scale on a load indicator that is fundamentally external to Kubernetes, you would need to

  1. create some kind of proxy API object inside the cluster (maybe a CRD?) that reflects the load indicator in a manner that the Kubernetes metrics pipeline can consume
  2. create an HPA with MetricSourceType == "Object" and point to the proxy object

@DirectXMan12
Contributor Author

DirectXMan12 commented Jul 23, 2017

@davidopp That is incorrect. In order to scale on a load indicator that's not one of the metrics provided by the resource metrics API (CPU, memory), you need to have some implementation of the custom metrics API (see k8s.io/metrics and kubernetes-incubator/custom-metrics-apiserver).

Then, you can either use the "pods" source type, if the metric describes the pods controlled by the target scalable of the HPA (e.g. network throughput), or the "object" source type, if the metric describes an unrelated object (for instance, you might scale on a queue length metric attached to the namespace).

In either case, the HPA controller will then query the custom metrics API accordingly. It is up to cluster admins, etc., to actually provide a method to collect the given metrics and expose an implementation of the custom metrics API.
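
To illustrate the two source types described above, a minimal sketch using the autoscaling/v2alpha1 schema; the target, metric names, and values are hypothetical:

```yaml
# Hypothetical example: one per-pod metric ("pods" source type) and one
# metric attached to a different object ("object" source type).
apiVersion: autoscaling/v2alpha1
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: queue-worker                # hypothetical scale target
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: Pods                        # metric describes the target's pods
    pods:
      metricName: network_throughput  # hypothetical per-pod custom metric
      targetAverageValue: "1M"
  - type: Object                      # metric describes another object
    object:
      target:
        apiVersion: v1
        kind: Namespace
        name: default
      metricName: queue_length        # hypothetical metric on the namespace
      targetValue: "100"
```
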

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Dec 31, 2019
@palnabarun
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jan 9, 2020
@jeremyrickard
Contributor

Hey there @mwielgus and @josephburnett -- 1.18 Enhancements lead here. I wanted to check in and see if you think this Enhancement will be graduating to stable in 1.18 or having a major change in its current level?

The current release schedule is:
Monday, January 6th - Release Cycle Begins
Tuesday, January 28th EOD PST - Enhancements Freeze
Thursday, March 5th, EOD PST - Code Freeze
Monday, March 16th - Docs must be completed and reviewed
Tuesday, March 24th - Kubernetes 1.18.0 Released

To be included in the release, this enhancement must have a merged KEP in the implementable status. The KEP must also have graduation criteria and a Test Plan defined.
If you would like to include this enhancement, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
We'll be tracking enhancements here: http://bit.ly/k8s-1-18-enhancements
Thanks!

@jeremyrickard
Contributor

Hey there @mwielgus and @josephburnett, Enhancements Team reaching out again. We're about a week out from Enhancement Freeze on the 28th. Let us know if you think there will be any activity on this.

@DirectXMan12 DirectXMan12 removed their assignment Jan 27, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Apr 26, 2020
@palnabarun
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Apr 27, 2020
@palnabarun
Member

Hey there @mwielgus @josephburnett -- 1.19 Enhancements Lead here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?

In order to have this part of the release:

  1. The KEP PR must be merged in an implementable state
  2. The KEP must have test plans
  3. The KEP must have graduation criteria.

The current release schedule is:

  • Monday, April 13: Week 1 - Release cycle begins
  • Tuesday, May 19: Week 6 - Enhancements Freeze
  • Thursday, June 25: Week 11 - Code Freeze
  • Thursday, July 9: Week 14 - Docs must be completed and reviewed
  • Tuesday, August 4: Week 17 - Kubernetes v1.19.0 released
  • Thursday, August 20: Week 19 - Release Retrospective

If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍

Thanks!

@palnabarun
Member

Hi @mwielgus @josephburnett, pinging back again as a reminder. 🙂

@palnabarun
Member

Hi @mwielgus @josephburnett

Tomorrow, Tuesday May 19 EOD Pacific Time is Enhancements Freeze

Will this enhancement be part of the 1.19 release cycle?

@palnabarun
Member

@mwielgus @josephburnett -- Unfortunately, the deadline for the 1.19 Enhancement freeze has passed. For now, this is being removed from the milestone and 1.19 tracking sheet. If there is a need to get this in, please file an enhancement exception.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Aug 18, 2020
@palnabarun
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Sep 1, 2020
@kikisdeliveryservice
Member

Hi @mwielgus @josephburnett

Enhancements Lead here. Any plans for this to graduate in 1.20?

Thanks!
Kirsten

@kikisdeliveryservice
Member

Hi @mwielgus @josephburnett

Any updates on whether this will be included in 1.20?

Enhancements Freeze is October 6th and by that time we require:

The KEP must be merged in an implementable state
The KEP must have test plans
The KEP must have graduation criteria
The KEP must have an issue in the milestone

I note that your design proposals are quite old; please consider updating to the new KEP format. See: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template

Thanks
Kirsten

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Dec 27, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 26, 2021
@pytimer

pytimer commented Jan 27, 2021

@DirectXMan12 I read the Enhance HPA Metrics Specificity doc, and I also use the HPA with metricLabelSelector. Because metricLabelSelector is a labels.Selector, I cannot use the selector when my metric label value is /v1/task, since label values only allow (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?. Could you consider supporting filtering on arbitrary values, not only values matching the Kubernetes label selector regexp?
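
For reference, a sketch of where such a selector appears in the HPA spec (using the v2beta2 metrics schema; the metric name and label key below are hypothetical). The path value shown is exactly the kind of value the label-value validation rejects:

```yaml
metrics:
- type: Pods
  pods:
    metric:
      name: http_requests        # hypothetical custom metric
      selector:
        matchLabels:
          path: /v1/task         # rejected: "/" is not a valid label value character
    target:
      type: AverageValue
      averageValue: "10"
```
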

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

howardjohn pushed a commit to howardjohn/enhancements that referenced this issue Oct 21, 2022