
Added the new custom resources RolloutOrchestrator&StagePodAutoscaler for rolling upgrade #12

Merged

Conversation

@houshengbo (Contributor) commented Oct 25, 2023

Changes

/kind

Fixes #13

Release Note


Docs


knative-prow bot commented Oct 25, 2023

@houshengbo: The label(s) kind/<kind> cannot be applied, because the repository doesn't have them.

In response to this:

Changes

/kind

Fixes #

Release Note


Docs


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@knative-prow knative-prow bot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Oct 25, 2023
codecov bot commented Oct 25, 2023

Codecov Report

Attention: 27 lines in your changes are missing coverage. Please review.

Comparison: base (d6ef1e3) 21.31% vs. head (4be398b) 72.16%. The report is 1 commit behind head on main.

❗ Current head 4be398b differs from pull request most recent head 6f80d02. Consider uploading reports for the commit 6f80d02 to get more accurate results

Files                                                    Patch %   Lines
...g/apis/serving/v1/rolloutorchestrator_lifecycle.go    58.18%    23 Missing ⚠️
pkg/apis/serving/v1/rolloutorchestrator_types.go         33.33%    4 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main      #12       +/-   ##
===========================================
+ Coverage   21.31%   72.16%   +50.85%     
===========================================
  Files           9        5        -4     
  Lines          61       97       +36     
===========================================
+ Hits           13       70       +57     
+ Misses         48       27       -21     


@houshengbo houshengbo force-pushed the add-service-autoscaler branch 2 times, most recently from 42012aa to ff06df9 Compare October 26, 2023 20:53
Comment on lines 78 to 86
type StagePodAutoscalerSpec struct {
	// MinScale sets the lower bound for the number of replicas.
	// +optional
	MinScale *int32 `json:"minScale,omitempty"`

	// MaxScale sets the upper bound for the number of replicas.
	// +optional
	MaxScale *int32 `json:"maxScale,omitempty"`
}
@yuzisun: Do we need this? The min/max scale can be obtained from the service or revision spec.

@houshengbo (author): Not quite. The service and revision only determine the ultimate min and max scale; the min and max scale here are for the current stage.

@yuzisun: The min/max shouldn't differ from stage to stage, since it is specified by the user, right?

@houshengbo (author): They both change from stage to stage in order to control the number of replicas.

@yuzisun: Then it is not really min/max? Min/max should stay the same for each revision, since they are user input. From what I understand, you update these numbers at each stage, and eventually the number of replicas can reach the min or max. Is that correct?

@houshengbo (author): Yes.
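
To make the thread above concrete, here is a minimal, hypothetical sketch of per-stage bounds stepping toward the user-set min/max. The linear ramp and every name here are illustrative assumptions, not the PR's actual code.

package main

import "fmt"

// stageBounds is an illustrative helper (not from this PR): given the
// user-set bounds on the revision and a rollout split into fixed steps,
// it returns the intermediate MinScale/MaxScale that a StagePodAutoscaler
// could carry at one stage.
func stageBounds(userMin, userMax, stage, totalStages int32) (minScale, maxScale int32) {
	// Linear ramp (an assumption): stage N of M allows N/M of the ultimate bounds.
	minScale = userMin * stage / totalStages
	maxScale = userMax * stage / totalStages
	if maxScale < 1 {
		maxScale = 1 // always allow at least one replica for the new revision
	}
	return minScale, maxScale
}

func main() {
	// The user sets minScale=2, maxScale=10 on the revision; roll out in 4 stages.
	for stage := int32(1); stage <= 4; stage++ {
		lo, hi := stageBounds(2, 10, stage, 4)
		fmt.Printf("stage %d/4: minScale=%d maxScale=%d\n", stage, lo, hi)
	}
	// The final stage yields exactly the user's bounds (2, 10), matching the
	// "yes" above: the per-stage values converge on the user input.
}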

Comment on lines +54 to +57
DesiredScale *int32 `json:"desiredScale,omitempty"`

// ActualScale shows the actual number of replicas for the revision.
ActualScale *int32 `json:"actualScale,omitempty"`
@yuzisun: These fields already exist on the KPA autoscaler; can we update the KPA status object instead?

@yuzisun: Knative already creates too many child custom resources; let's try not to create new ones unless we really need to.

@houshengbo (author), Oct 30, 2023: That is a good point, but I really do need this ActualScale in the status of the StagePodAutoscaler. The reason is as follows:

A change in the PodAutoscaler can only kick off the reconcile loop of the revision, but I need any change in the PodAutoscaler to also kick off the reconcile loop of the service orchestrator. So I use the PodAutoscaler to kick off the reconcile loop of the StagePodAutoscaler (they share the same name), and the resulting change of ActualScale in the StagePodAutoscaler then kicks off the reconcile loop of the service orchestrator.
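
A minimal sketch of that two-hop trigger chain, using simplified stand-in types: the real code uses Knative's PodAutoscaler and the StagePodAutoscaler added in this PR, and every name and shape below is an assumption for illustration only.

package main

import "fmt"

// Simplified stand-ins for the real Knative types.
type PodAutoscaler struct {
	Name        string
	ActualScale int32
}

type StagePodAutoscaler struct {
	Name        string
	ActualScale int32
}

// syncActualScale mirrors the chain described above: a PodAutoscaler event
// re-enqueues the StagePodAutoscaler with the same name (hop 1); if copying
// ActualScale changes the StagePodAutoscaler's status, that status update in
// turn re-enqueues the owning service (rollout) orchestrator (hop 2).
func syncActualScale(pa PodAutoscaler, spa *StagePodAutoscaler) (changed bool) {
	if pa.Name != spa.Name {
		return false // the shared name is what links the two objects
	}
	if spa.ActualScale != pa.ActualScale {
		spa.ActualScale = pa.ActualScale
		return true // a real controller would now reconcile the orchestrator
	}
	return false
}

func main() {
	spa := &StagePodAutoscaler{Name: "demo-00002"}
	pa := PodAutoscaler{Name: "demo-00002", ActualScale: 3}
	if syncActualScale(pa, spa) {
		fmt.Println("ActualScale changed; the orchestrator would be re-enqueued")
	}
}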

Comment on lines 109 to 117
// TargetRevisions holds the information of the target revisions in the final stage.
// These entries will always contain RevisionName references.
// +optional
TargetRevisions []TargetRevision `json:"targetRevisions,omitempty"`

// InitialRevisions holds the information of the initial revisions in the initial stage.
// These entries will always contain RevisionName references.
// +optional
InitialRevisions []TargetRevision `json:"initialRevisions,omitempty"`
@yuzisun: Can you document an example of how InitialRevisions and TargetRevisions are used?

@houshengbo (author): Yes.

  1. When no revision is available yet, both InitialRevisions and TargetRevisions are empty.
  2. When there is an existing revision and we create a new one, InitialRevisions is set to the existing revision with 100% traffic, and TargetRevisions is set to the new revision with 100% traffic.
  3. During the transition from the old revision to the new one, both lists stay unchanged: TargetRevisions keeps pointing at the new revision (the ultimate goal), and InitialRevisions keeps pointing at the old revision (the initial state).
  4. When the transition is over, InitialRevisions and TargetRevisions are reset to empty.
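
As a concrete snapshot of step 3 above (mid-transition), here is a hypothetical sketch assuming TargetRevision carries a revision name and a traffic percent; only the two list fields appear in the diff above, so the field names and revision names are illustrative.

package main

import "fmt"

// TargetRevision is a simplified stand-in for the type defined in this PR.
type TargetRevision struct {
	RevisionName string
	Percent      int64
}

func main() {
	// Mid-transition (step 3): both lists keep pointing at the endpoints of
	// the rollout even while per-stage traffic shifts between them.
	initialRevisions := []TargetRevision{{RevisionName: "demo-00001", Percent: 100}} // where the rollout started
	targetRevisions := []TargetRevision{{RevisionName: "demo-00002", Percent: 100}}  // where it will end

	fmt.Println("initialRevisions:", initialRevisions)
	fmt.Println("targetRevisions: ", targetRevisions)
	// After the final stage (step 4), both lists are reset to empty.
}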

@yuzisun: OK, good to leave this as comments on the defined fields.


// ServiceOrchestratorSpec holds the desired state of the ServiceOrchestrator (from the client).
type ServiceOrchestratorSpec struct {
	StageTarget `json:",inline"`
@yuzisun: Doesn't the stage-to-stage transition indicate that these should be status fields?

@houshengbo (author): These are fields that the service orchestrator sets for each stage; they are desired state, so they are specs.
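
A sketch of what "spec written by a controller" can look like here, under stated assumptions: the StageTargetRevisions field name and the traffic split below are illustrative, not taken from the diff. The point is that per-stage targets are desired state for the current stage (spec), even though the orchestrator, not a human, rewrites them.

package main

import "fmt"

// Simplified stand-ins; StageTargetRevisions is an assumed field name.
type TargetRevision struct {
	RevisionName string
	Percent      int64
}

type StageTarget struct {
	// Desired targets for the *current* stage, rewritten at every stage by
	// the controller driving the rollout: desired state, hence spec.
	StageTargetRevisions []TargetRevision
}

func main() {
	var st StageTarget
	// Stage 1: shift 20% of traffic to the new revision.
	st.StageTargetRevisions = []TargetRevision{
		{RevisionName: "demo-00001", Percent: 80},
		{RevisionName: "demo-00002", Percent: 20},
	}
	fmt.Println(st.StageTargetRevisions)
}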

@houshengbo houshengbo force-pushed the add-service-autoscaler branch from a2ead19 to 8dec861 Compare November 3, 2023 15:02
spec:
group: serving.knative.dev
names:
kind: ServiceOrchestrator
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I think a better name is RolloutOrchestrator ?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Done.

@houshengbo houshengbo force-pushed the add-service-autoscaler branch from 8dec861 to 2e3cb2f Compare November 10, 2023 03:46
@houshengbo houshengbo changed the title Added the new custom resources ServiceOrchestrator&StagePodAutoscaler for rolling upgrade Added the new custom resources RolloutOrchestrator&StagePodAutoscaler for rolling upgrade Nov 10, 2023
@houshengbo houshengbo force-pushed the add-service-autoscaler branch from 23b6243 to ca778d9 Compare November 10, 2023 22:50
@houshengbo houshengbo force-pushed the add-service-autoscaler branch from ca778d9 to eb88382 Compare November 10, 2023 23:03
@houshengbo houshengbo force-pushed the add-service-autoscaler branch from 4be398b to 6f80d02 Compare November 15, 2023 00:53
@yuzisun commented Nov 16, 2023

/lgtm
/approve

@knative-prow knative-prow bot added the lgtm Indicates that a PR is ready to be merged. label Nov 16, 2023
knative-prow bot commented Nov 16, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: houshengbo, yuzisun

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow knative-prow bot merged commit f14dd6d into knative-extensions:main Nov 16, 2023
20 checks passed
Labels: approved · lgtm · size/XXL

Successfully merging this pull request may close these issues: Added the new CRDs and CRs to implement the rolling upgrade (#13)

2 participants