Added the new custom resources RolloutOrchestrator & StagePodAutoscaler for rolling upgrade #12
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main      #12       +/-   ##
===========================================
+ Coverage   21.31%   72.16%   +50.85%
===========================================
  Files           9        5        -4
  Lines          61       97       +36
===========================================
+ Hits           13       70       +57
+ Misses         48       27       -21
```

View full report in Codecov by Sentry.
```go
type StagePodAutoscalerSpec struct {
	// MinScale sets the lower bound for the number of the replicas.
	// +optional
	MinScale *int32 `json:"minScale,omitempty"`

	// MaxScale sets the upper bound for the number of the replicas.
	// +optional
	MaxScale *int32 `json:"maxScale,omitempty"`
}
```
Do we need this? The min/max scale can be obtained from the service or revision spec.
Not quite: the service and revision only determine the ultimate min and max scale. The min and max scale here are for the current stage.
The min/max for each stage shouldn't differ, since it is specified by the user, right?
They both change from stage to stage in order to control the number of replicas.
Then it is not really min/max? Min/max should stay the same for each revision, since they are user input. From what I understand, you update these numbers for each stage, and eventually the number of replicas reaches the user's min or max. Is that correct?
Yes.
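To illustrate the idea in this thread: a minimal sketch, assuming each stage derives its bounds by clamping a per-stage replica target to the user's ultimate min/max from the Revision spec (the function and parameter names are hypothetical, not this PR's code):

```go
// clampStageBound returns a per-stage scale bound: the stage's replica
// target, constrained by the user-specified ultimate min/max.
// Hypothetical helper for illustration only.
func clampStageBound(stageTarget, ultimateMin, ultimateMax int32) int32 {
	if stageTarget < ultimateMin {
		return ultimateMin
	}
	if stageTarget > ultimateMax {
		return ultimateMax
	}
	return stageTarget
}
```

Each stage would then get its own MinScale/MaxScale around that target, stepping the replica count toward the ultimate bounds stage by stage.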
```go
// DesiredScale shows the desired number of replicas for the revision.
DesiredScale *int32 `json:"desiredScale,omitempty"`

// ActualScale shows the actual number of replicas for the revision.
ActualScale *int32 `json:"actualScale,omitempty"`
```
These fields already exist on the KPA autoscaler; can we update the KPA status object instead?
Knative already creates too many child custom resources; let's try not to create new ones unless we really need to.
This is a good point, but I really need ActualScale in the status of the StagePodAutoscaler. The reason is as follows: a change in the PodAutoscaler is only able to kick off the reconcile loop of the revision, but I need any change in the PodAutoscaler to also kick off the reconcile loop of the service orchestrator. So I use the PodAutoscaler to kick off the reconcile loop of the StagePodAutoscaler, since they share the same name, and then use the change of ActualScale in the StagePodAutoscaler to kick off the reconcile loop of the service orchestrator.
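To make that flow concrete, here is a minimal sketch of the event wiring, assuming plain client-go shared informers; the names `wireEventHandlers`, `enqueueSPA`, and `enqueueOrchestrator` are assumptions for illustration, not this PR's actual code:

```go
package rollout

import "k8s.io/client-go/tools/cache"

// wireEventHandlers sketches the event flow described above. The informer
// and enqueue parameters are hypothetical; this is not the PR's code.
func wireEventHandlers(paInformer, spaInformer cache.SharedInformer,
	enqueueSPA, enqueueOrchestrator func(obj interface{})) {

	// A PodAutoscaler and its StagePodAutoscaler share the same
	// namespace/name, so any PodAutoscaler update re-queues the matching
	// StagePodAutoscaler for reconciliation.
	paInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) { enqueueSPA(newObj) },
	})

	// Writing ActualScale into the StagePodAutoscaler status then triggers
	// this handler, which re-queues the owning service orchestrator.
	spaInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) { enqueueOrchestrator(newObj) },
	})
}
```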
```go
// TargetRevisions holds the information of the target revisions in the final stage.
// These entries will always contain RevisionName references.
// +optional
TargetRevisions []TargetRevision `json:"targetRevisions,omitempty"`

// InitialRevisions holds the information of the initial revisions in the initial stage.
// These entries will always contain RevisionName references.
// +optional
InitialRevisions []TargetRevision `json:"initialRevisions,omitempty"`
```
Can you help document an example of how InitialRevision and TargetRevision are used?
Yes.
- When there is no revision available, the InitialRevisions will be empty, and the TargetRevisions will be empty.
- When there is an existing revision, and we create a new revision, InitialRevisions will be set to the existing revision with 100% traffic, and TargetRevisions will be set to the new revision with 100% traffic.
- During the transition from the old to the new revision, TargetRevisions and InitialRevisions remain unchanged: TargetRevisions keeps pointing to the new revision (the ultimate goal), and InitialRevisions keeps pointing to the old revision (the initial state).
- When the transition is over, InitialRevisions and TargetRevisions will be reset to empty.
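To make those bullets concrete, here is a sketch of the values during an upgrade, assuming (hypothetically) that TargetRevision carries a revision name and a traffic percent; the actual field names may differ:

```go
// Hypothetical, simplified TargetRevision shape for illustration only.
type TargetRevision struct {
	RevisionName string
	Percent      int64
}

// During the transition from revision "demo-00001" to "demo-00002":
var initialRevisions = []TargetRevision{
	{RevisionName: "demo-00001", Percent: 100}, // old revision, initial state
}
var targetRevisions = []TargetRevision{
	{RevisionName: "demo-00002", Percent: 100}, // new revision, ultimate goal
}
```

Once the transition is over, both slices are reset to empty.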
OK, good to leave this as comments on the defined fields.
```go
// ServiceOrchestratorSpec holds the desired state of the ServiceOrchestrator (from the client).
type ServiceOrchestratorSpec struct {
	StageTarget `json:",inline"`
}
```
Stages in transition: doesn't that indicate these should be status fields?
These are fields the service orchestrator will set for each stage; they are spec fields.
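As a rough sketch of that distinction (field names hypothetical, not this PR's API), the inlined StageTarget would carry the per-stage targets the orchestrator writes into the spec at each step:

```go
// StageTarget is a hypothetical sketch of the per-stage spec fields the
// orchestrator sets at each step; the field name is illustrative only.
type StageTarget struct {
	// StageTargetRevisions lists the revisions, with their traffic and
	// replica targets, for the current stage of the rollout.
	// +optional
	StageTargetRevisions []TargetRevision `json:"stageTargetRevisions,omitempty"`
}
```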
```yaml
spec:
  group: serving.knative.dev
  names:
    kind: ServiceOrchestrator
```
I think a better name is RolloutOrchestrator?
Done.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: houshengbo, yuzisun.
Changes
Fixes #13