
contextual logging #3077

Open
7 of 12 tasks
pohly opened this issue Dec 6, 2021 · 53 comments
Assignees
Labels
lead-opted-in Denotes that an issue has been opted in to a release sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status wg/structured-logging Categorizes an issue or PR as relevant to WG Structured Logging.
Milestone

Comments

@pohly
Contributor

pohly commented Dec 6, 2021

Enhancement Description


Current configuration

https://github.com/kubernetes/kubernetes/blob/master/hack/logcheck.conf

Status

The following table counts log calls that need to be converted. The numbers for contextual logging include those for structured logging.

At this point, controllers can be converted to contextual logging, as can any of the components that were already converted to structured logging. If you want to pick one, ping @pohly on the #wg-structured-logging Slack channel. See the structured and contextual logging migration instructions for guidance.

Besides migrating log calls, we also might have to migrate from APIs which don't support contextual logging to APIs which do:

From 2022-10-27 ~= Kubernetes 1.26

The focus was on converting kube-controller-manager. Of 1944 unstructured and/or non-contextual logging calls in pkg/controller and cmd/kube-controller-manager, 82% were converted to structured, contextual logging in Kubernetes 1.27.

Component Non-Structured Logging Non-Contextual Logging Owner
pkg/controller/bootstrap 15 28 @mengjiao-liu, kubernetes/kubernetes#113464
pkg/controller/certificates 22 31 @mengjiao-liu, kubernetes/kubernetes#113994
pkg/controller/clusterroleaggregation 2 2 @mengjiao-liu, kubernetes/kubernetes#113910
pkg/controller/cronjob 1 44 @mengjiao-liu, kubernetes/kubernetes#113428
pkg/controller/daemon 45 85 @249043822, kubernetes/kubernetes#113622
pkg/controller/deployment 23 79 @249043822, kubernetes/kubernetes#113525
pkg/controller/disruption 29 56 @Namanl2001, kubernetes/kubernetes#116021
pkg/controller/endpoint 12 24 lunhuijie (Slack)
pkg/controller/endpointslice 22 36 @Namanl2001, kubernetes/kubernetes#115295
pkg/controller/endpointslicemirroring 18 28 @Namanl2001, kubernetes/kubernetes#114982
pkg/controller/garbagecollector 55 105 @ncdc, kubernetes/kubernetes#113471
pkg/controller/job 12 36 was: @sanwishe, kubernetes/kubernetes#113576, now: @mengjiao-liu
pkg/controller/namespace 30 55 @yangjunmyfm192085, kubernetes/kubernetes#113443
pkg/controller/nodeipam 135 210 @yangjunmyfm192085, kubernetes/kubernetes#112670
pkg/controller/nodelifecycle 60 106 @yangjunmyfm192085, kubernetes/kubernetes#112670
pkg/controller/podautoscaler 9 13 @freddie400, kubernetes/kubernetes#114687
pkg/controller/podgc 10 24 @pravarag, kubernetes/kubernetes#114689
pkg/controller/replicaset 20 49 @Namanl2001, kubernetes/kubernetes#114871
pkg/controller/resourcequota 24 37 @ncdc, kubernetes/kubernetes#113315
pkg/controller/serviceaccount 22 31 @Namanl2001, kubernetes/kubernetes#114918
pkg/controller/statefulset 19 59 @249043822, kubernetes/kubernetes#113840
pkg/controller/storageversiongc 4 6 @songxiao-wang87, kubernetes/kubernetes#113986
pkg/controller/testutil 9 9 @Octopusjust, kubernetes/kubernetes#114061
pkg/controller/ttl 4 8 wxs (Slack) = @songxiao-wang87, kubernetes/kubernetes#113916
pkg/controller/ttlafterfinished 9 15 @obaranov1, kubernetes/kubernetes#115332
pkg/controller/util 0 19 @fatsheep9146, kubernetes/kubernetes#115049
pkg/controller/volume 351 673 @yangjunmyfm192085, kubernetes/kubernetes#113584
pkg/kubelet 1 1805 @fmuyassarov
pkg/scheduler 0 348 @knelasevero, kubernetes/kubernetes#111155
staging/src/k8s.io/apiextensions-apiserver 57 81
staging/src/k8s.io/apimachinery 73 114 @yanjing1104
staging/src/k8s.io/apiserver 262 543
staging/src/k8s.io/client-go 161 267
staging/src/k8s.io/cloud-provider 108 146
staging/src/k8s.io/cluster-bootstrap 2 4
staging/src/k8s.io/code-generator 108 168
staging/src/k8s.io/component-base 32 63
staging/src/k8s.io/component-helpers 7 8
staging/src/k8s.io/controller-manager 10 10
staging/src/k8s.io/csi-translation-lib 3 4
staging/src/k8s.io/kube-aggregator 52 76
staging/src/k8s.io/kube-controller-manager 0 0
staging/src/k8s.io/kubectl 89 147 @yanjing1104
staging/src/k8s.io/legacy-cloud-providers 1445 2238
staging/src/k8s.io/mount-utils 54 92
staging/src/k8s.io/pod-security-admission 1 34 @Namanl2001, kubernetes/kubernetes#114471
staging/src/k8s.io/sample-controller 16 22 @pchan, kubernetes/kubernetes#113879

From 2023-03-17 = Kubernetes v1.27.0-beta.0

All of kube-controller-manager got converted.

Tables created with:

go install sigs.k8s.io/logtools/logcheck@latest

echo "Component | Non-Structured Logging | Non-Contextual Logging | Owner " && \
echo "------ | ------- | ------- | ------" && \
for i in $(find pkg/controller/* pkg/scheduler pkg/kubelet pkg/apis pkg/api cmd/kube-* cmd/kubelet  -maxdepth 0 -type d | sort); do \
     echo "$i | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false 2>&1 ./... | wc -l ) | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false -check-contextual ./... 2>&1 | wc -l ) |" | grep -v '| 0 | 0 |'; \
done
Component Non-Structured Logging Non-Contextual Logging Owner
cmd/kube-apiserver 7 8 on hold
cmd/kubelet 0 47 @fmuyassarov
cmd/kube-proxy 0 46 on hold
pkg/controller/certificates 22 31 @mengjiao-liu, kubernetes/kubernetes#113994
pkg/controller/deployment 2 5 @fatsheep9146, kubernetes/kubernetes#116930
pkg/controller/disruption 29 54 @obaranov1, kubernetes/kubernetes#116021, @mengjiao-liu, kubernetes/kubernetes#119147
pkg/controller/endpoint 12 24 @my-git9, kubernetes/kubernetes#116755
pkg/controller/endpointslice 20 35 @Namanl2001, kubernetes/kubernetes#115295
pkg/controller/endpointslicemirroring 18 28 @Namanl2001, kubernetes/kubernetes#114982
pkg/controller/garbagecollector 3 3 @fatsheep9146, kubernetes/kubernetes#116930
pkg/controller/job 12 35 @sanwishe, kubernetes/kubernetes#113576 (needs new owner?)
pkg/controller/nodeipam 8 13 @fatsheep9146, kubernetes/kubernetes#116930
pkg/controller/podgc 10 24 @pravarag, kubernetes/kubernetes#114689, @pohly, kubernetes/kubernetes#119250
pkg/controller/replicaset 9 18 @fatsheep9146, kubernetes/kubernetes#116930
pkg/controller/statefulset 3 5 @kerthcet, kubernetes/kubernetes#118071
pkg/controller/testutil 9 9 @Octopusjust, kubernetes/kubernetes#114061
pkg/controller/util 0 4 @fatsheep9146, kubernetes/kubernetes#116930
pkg/controller/volume 5 20 @fatsheep9146, kubernetes/kubernetes#116930
pkg/kubelet 2 1923 @fmuyassarov, kubernetes/kubernetes#114352
pkg/scheduler 2 349 @mengjiao-liu, kubernetes/kubernetes#91633

From 2023-09-18 =~ Kubernetes v1.28

Component Non-Structured Logging Non-Contextual Logging Owner
cmd/kube-apiserver 6 7 on hold
cmd/kubelet 0 52 @fmuyassarov (?), kubernetes/kubernetes#114352
cmd/kube-proxy 0 41 on hold
pkg/kubelet 2 1942 @fmuyassarov (?)
pkg/scheduler 1 137 @mengjiao-liu, https://github.com/kubernetes/kubernetes/pulls/mengjiao-liu
staging/src/k8s.io/apiserver ? ? @tallclair, kubernetes/kubernetes#114198
staging/src/k8s.io/client-go/discovery 11 21 on hold
staging/src/k8s.io/client-go/examples 14 14 on hold
staging/src/k8s.io/client-go/metadata 2 4 on hold
staging/src/k8s.io/client-go/plugin 5 8 on hold
staging/src/k8s.io/client-go/rest 16 37 on hold
staging/src/k8s.io/client-go/restmapper 3 6 on hold
staging/src/k8s.io/client-go/tools 104 171 @pohly, kubernetes/kubernetes#120729
staging/src/k8s.io/client-go/transport 17 31 on hold
staging/src/k8s.io/client-go/util 12 19 on hold

Table created manually and with:

go install sigs.k8s.io/logtools/logcheck@latest

echo "Component | Non-Structured Logging | Non-Contextual Logging | Owner " && \
echo "------ | ------- |  ------- | ------" && \
for i in $(find pkg/scheduler pkg/kubelet pkg/apis pkg/api cmd/kube-* cmd/kubelet staging/src/k8s.io/client-go/* -maxdepth 0 -type d | sort); do \
     echo "$i | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false 2>&1 ./... | wc -l ) | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false -check-contextual ./... 2>&1 | wc -l ) |" | grep -v '| 0 | 0 |'; \
done

From 2023-11-20 =~ Kubernetes v1.29

Component Non-Structured Logging Non-Contextual Logging Owner
cmd/kube-apiserver 6 7 @tallclair
cmd/kubelet 0 52 @fmuyassarov (?), kubernetes/kubernetes#114352
pkg/kubelet 2 1983 @fmuyassarov
cmd/kube-proxy 0 42 @fatsheep9146, kubernetes/kubernetes#122197
pkg/proxy 0 360 @fatsheep9146, see above
staging/src/k8s.io/apiserver 285 655 @tallclair, kubernetes/kubernetes#114198
staging/src/k8s.io/client-go/discovery 11 21
staging/src/k8s.io/client-go/examples 14 14
staging/src/k8s.io/client-go/metadata 2 4
staging/src/k8s.io/client-go/plugin 5 8
staging/src/k8s.io/client-go/rest 16 37
staging/src/k8s.io/client-go/restmapper 3 6
staging/src/k8s.io/client-go/tools 83 143 @pohly
staging/src/k8s.io/client-go/transport 17 31
staging/src/k8s.io/client-go/util 12 19

Table created with:

go install sigs.k8s.io/logtools/logcheck@latest

echo "Component | Non-Structured Logging | Non-Contextual Logging | Owner " && echo "------ | ------- |  ------- | ------" && for i in $(find pkg/scheduler pkg/kubelet pkg/apis pkg/api cmd/kube-* cmd/kubelet staging/src/k8s.io/client-go/* staging/src/k8s.io/apiserver -maxdepth 0 -type d | sort); do      echo "$i | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false 2>&1 ./... | wc -l ) | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false -check-contextual ./... 2>&1 | wc -l ) |" | grep -v '| 0 | 0 |'; done
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Dec 6, 2021
@pohly
Contributor Author

pohly commented Dec 8, 2021

/sig instrumentation
/wg structured-logging

@k8s-ci-robot k8s-ci-robot added sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. wg/structured-logging Categorizes an issue or PR as relevant to WG Structured Logging. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Dec 8, 2021
@gracenng gracenng added the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Jan 17, 2022
@gracenng gracenng added this to the v1.24 milestone Jan 17, 2022
@hosseinsalahi

hosseinsalahi commented Jan 21, 2022

Hello @pohly

v1.24 Enhancements team here.

Just checking in as we approach enhancements freeze at 18:00 PT on Thursday, February 3rd, 2022. This enhancement is targeting alpha for v1.24.

Here’s where this enhancement currently stands:

  • Updated KEP file using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable for this release
  • KEP has a test plan section filled out.
  • KEP has up to date graduation criteria.
  • KEP has a production readiness review that has been completed and merged into k/enhancements.

The status of this enhancement is marked as tracked. Please keep the issue description and the targeted stage up-to-date for release v1.24.
Thanks!

@pohly
Contributor Author

pohly commented Feb 3, 2022

@encodeflush: the KEP PR was merged; all criteria for alpha in 1.24 should be met now.

@chrisnegus

Hi @pohly 👋 1.24 Docs shadow here.

This enhancement is marked as 'Needs Docs' for the 1.24 release.

Please follow the steps detailed in the documentation to open a PR against the dev-1.24 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thu March 31, 11:59 PM PDT.

Also, if needed take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.

Thanks!

@valaparthvi

valaparthvi commented Mar 21, 2022

Hi @pohly 👋 1.24 Release Comms team here.

We have an opt-in process for the feature blog delivery. If you would like to publish a feature blog for this issue in this cycle, then please opt in on this tracking sheet.

The deadline for submissions and the feature blog freeze is scheduled for 01:00 UTC Wednesday 23rd March 2022 / 18:00 PDT Tuesday 22nd March 2022. Other important dates for delivery and review are listed here: https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24#timeline.

For reference, here is the blog for 1.23.

Please feel free to reach out any time to me or on the #release-comms channel with questions or comments.

Thanks!

@hosseinsalahi

hosseinsalahi commented Mar 21, 2022

Hello @pohly

I'm just checking in once more as we approach the 1.24 Code Freeze at 18:00 PDT on Tuesday, March 29th, 2022.

Please ensure the following items are completed:

For note, the status of this enhancement is currently marked as tracked.

Thank you!

@pohly
Contributor Author

pohly commented Mar 23, 2022

/assign

@pohly
Contributor Author

pohly commented Mar 23, 2022

I have added two doc PRs to the description.

@Priyankasaggu11929
Member

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.24 milestone May 10, 2022
@Priyankasaggu11929 Priyankasaggu11929 added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels May 10, 2022
@logicalhan
Member

@pohly can we close this?

@logicalhan
Member

/assign @serathius

@yanjing1104

Hi @pohly, sorry for the inconvenience, but I won't be able to follow up on these two PRs and have closed them. Could you help re-assign/release these two features?
staging/src/k8s.io/apimachinery (kubernetes/kubernetes#115317)
staging/src/k8s.io/kubectl (kubernetes/kubernetes#115087)

@pohly
Contributor Author

pohly commented Dec 23, 2023

Please hold off on submitting PRs. I need more time to actually look at some of the packages before I can provide guidance on how to proceed.

@WillardHu

Some design discussion points for migrating the client-go/rest component to contextual logging:

  1. Usually client-go returns the rest.Request and rest.Result for the caller to use. The creation of their instances is controlled by client-go. Can we define the logger as a struct field? For example:
    type Request struct {
        ...
        logger *klog.Logger
        ...
    }
    
    func (r *Request) Fun() {
        r.log().Info(...)
    }
    
    // If the caller does not define it, a default one is returned
    func (r *Request) log() klog.Logger {
        if r.logger == nil {
            return klog.Background().WithName("rest_request")
        }
        return *r.logger
    }
  2. The creation of rest.Config is controlled by the caller, so we could add a context parameter to some of the functions used to build rest.Config, like:
    • Directly modify:
      func InClusterConfig(ctx context.Context) (*Config, error) {
          logger := klog.FromContext(ctx).WithName("rest_config")
          ...
      }
    • Consider compatibility:
      func InClusterConfig() (*Config, error) {
          return InClusterConfigWithContext(context.TODO())
      }
      
      func InClusterConfigWithContext(ctx context.Context) (*Config, error) {
          logger := klog.FromContext(ctx).WithName("rest_config")
          ...
      }
    Should we consider compatibility?
  3. Can other internally used structs follow the approach from question 1, and can we add a context parameter to some internal functions?

@pohly
Contributor Author

pohly commented Dec 24, 2023

Let me elaborate further... the problem with changing client-go or any other package under staging is that we cannot simply change an API. It breaks too much existing code. Instead, we have to extend the API in a backwards-compatible way. Adding a klog.TODO or some other TODO remark doesn't help because it doesn't solve the API problem.

But our work isn't done at that point. Adding a new API is pointless if it doesn't get used by the Kubernetes components that we maintain. Out-of-tree components may also want to know that they should switch to the new API. Adding a "Deprecated" remark is too strong; the existing APIs are fine.

What I came up with is //logcheck:context as a special remark that tells logcheck to complain about an API, but only in code which cares about contextual logging. This isn't a solution in all cases, but at least when adding a WithContext variant it works.

So whoever starts converting a package first has to look at the existing usage of an API and then figure out how to change both the API and that code. This is not easy, so beware before signing up to do this!
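In practice, the extension pattern looks roughly like the following sketch. The function pair follows the InClusterConfig example above; the bodies are illustrative stubs, and the exact placement of the //logcheck:context marker is an assumption, not the verbatim client-go source:

```go
package main

import (
	"context"
	"fmt"
)

// Config is a stub standing in for rest.Config.
type Config struct{ Host string }

// InClusterConfig is the existing, context-free entry point. The marker
// below tells `logcheck -check-contextual` to complain about callers,
// while code that has not opted into contextual logging stays untouched.
//
//logcheck:context // InClusterConfigWithContext should be used instead
func InClusterConfig() (*Config, error) {
	return InClusterConfigWithContext(context.TODO())
}

// InClusterConfigWithContext is the new, backwards-compatible variant;
// real code would derive a logger here via klog.FromContext(ctx).
func InClusterConfigWithContext(ctx context.Context) (*Config, error) {
	_ = ctx
	return &Config{Host: "https://kubernetes.default.svc"}, nil
}

func main() {
	cfg, err := InClusterConfigWithContext(context.Background())
	fmt.Println(cfg.Host, err)
}
```

The old entry point keeps working unchanged, so out-of-tree callers are not broken; only code bases that enable the contextual logcheck pass get nudged toward the new variant.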

@WillardHu

WillardHu commented Dec 27, 2023

Thanks for your guidance. I combed through the structs and function call relationships in the rest package.

  1. I added a logger field to the Request{} and Result{} structs and use methods to control their behavior:
    type Request struct {
        ...
        // logger may be set by the caller via the SetLoggerFromContext(..) method.
        logger *logr.Logger
    }
    
    // SetLoggerFromContext retrieves the logger set by the caller and stores a new logger
    // with a constant name in the Request's logger field.
    func (r *Request) SetLoggerFromContext(ctx context.Context) {
        inst := klog.FromContext(ctx).WithName(loggerNameRequest)
        r.logger = &inst
    }
    
    // log returns a non-nil logger for the methods to use. If the logger field is unset,
    // it sets a default logger first.
    func (r *Request) log() logr.Logger {
        if r.logger == nil {
            def := klog.Background().WithName(loggerNameRequest)
            r.logger = &def
        }
        return *r.logger
    }
    Public Request methods can use r.log() to write structured logs, and private methods can use it to build contextual logging when calling internal functions and helper methods.
  2. urlbackoff.go, warnings.go and with_retry.go contain internal functions and helper methods for the Request{}, so they can be converted to contextual logging.
  3. The functions in plugin.go are called from AuthProvider implementations' init(), so I added //logcheck:context comment tags to them.
  4. Some of Config's creation functions are used infrequently, perhaps only once per project, so I figured callers wouldn't care much about contextual logging and added the //logcheck:context comment tag there too.

I have revised my PR again; please check whether it is what you expected. Thank you.

@pohly
Contributor Author

pohly commented Dec 27, 2023

@WillardHu: This issue is not a good place to discuss API design aspects. Let's do that on Slack.

@WillardHu

@WillardHu: This issue is not a good place to discuss API design aspects. Let's do that on Slack.

OK, thanks

@dashpole
Contributor

/label lead-opted-in

@dashpole dashpole added this to the v1.30 milestone Jan 18, 2024
@k8s-ci-robot k8s-ci-robot added the lead-opted-in Denotes that an issue has been opted in to a release label Jan 18, 2024
@tjons

tjons commented Jan 31, 2024

Hello @pohly 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on Friday, February 9th, 2024 at 02:00 UTC.

This enhancement is targeting stage beta for 1.30 (correct me if otherwise).

Here's where this enhancement currently stands:

  • KEP readme using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable for latest-milestone: 1.30. KEPs targeting stable will need to be marked as implemented after code PRs are merged and the feature gates are removed.
  • KEP readme has up-to-date graduation criteria
  • KEP has a production readiness review that has been completed and merged into k/enhancements. (For more information on the PRR process, check here).

For this KEP, we would just need to complete the following:

  • Merge the KEP changes readme into the k/enhancements repo.
  • Complete the PRR review process and merge it into k/enhancements.
  • Mark this KEP as implementable for latest-milestone: 1.30.

The status of this enhancement is marked as at risk for enhancement freeze. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

@pohly
Contributor Author

pohly commented Feb 7, 2024

KEP PR for 1.30 got merged.

@tjons

tjons commented Feb 8, 2024

Hey @pohly - with all the requirements fulfilled this enhancement is now marked as tracked for the upcoming enhancements freeze 🚀! Thanks for your hard work

@tjons

tjons commented Feb 9, 2024

Hello 👋, 1.30 Enhancements team here.

Unfortunately, this enhancement did not meet requirements for enhancements freeze.

I made an error; this question under scalability is now required in the KEP: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template#can-enabling--using-this-feature-result-in-resource-exhaustion-of-some-node-resources-pids-sockets-inodes-etc

If you still wish to progress this enhancement in 1.30, please file an exception request. Thanks!

@k8s-ci-robot
Contributor

@tjons: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Milestone Maintainers Team and have them propose you as an additional delegate for this responsibility.

In response to this:

/milestone clear

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@salehsedghpour
Contributor

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.30 milestone Feb 9, 2024
@salehsedghpour salehsedghpour added this to the v1.30 milestone Feb 13, 2024
@drewhagen
Member

drewhagen commented Feb 15, 2024

Hello @pohly 👋, 1.30 Docs Lead here.

Does this enhancement work planned for 1.30 require any new docs or modification to existing docs?
If so, please follow the steps here to open a PR against the dev-1.30 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday February 22nd 2024 18:00 PDT.

Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release.
Thank you!

@natalisucks

Hi @pohly, @shivanshu1333, and @serathius,

👋 from the v1.30 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement!

We encourage blogs for features including, but not limited to: breaking changes, features and changes important to our users, and features that have been in progress for a long time and are graduating.

To opt in, you need to open a Feature Blog placeholder PR against the website repository.
The placeholder PR deadline is 27th February, 2024.

Here's the 1.30 Release Calendar

@pohly
Contributor Author

pohly commented Feb 22, 2024

Doc PR for 1.30 created and linked in the description.

@tjons

tjons commented Feb 25, 2024

Hey again @pohly 👋 Enhancements team here,

Just checking in as we approach code freeze at 02:00 UTC Wednesday 6th March 2024 .

Here's where this enhancement currently stands:

  • All PRs to the Kubernetes repo that are related to your enhancement are linked in the above issue description (for tracking purposes).
  • All PRs are ready to be merged (they have approved and lgtm labels applied) by the code freeze deadline. This includes tests.

For this enhancement, it looks like the following PRs are open and need to be merged before code freeze:

Also, please let me know if there are other PRs in k/k we should be tracking for this KEP.
As always, we are here to help if any questions come up. Thanks!

@tjons

tjons commented Mar 6, 2024

Hi @pohly - checking in again here. There are ~6 hours remaining until code freeze. Do you think you'll be able to merge kubernetes/kubernetes#120696 in time?

@salehsedghpour
Contributor

Hello @pohly 👋 , Enhancements team here.

With all the implementation (code-related) PRs merged as per the issue description:

This enhancement is now marked as tracked for code freeze for the v1.30 Code Freeze!

Projects
Status: Tracked for Doc Freeze