feat: add system config consensus to deprecate clique #1102
base: develop
Conversation
Force-pushed from `d87b55d` to `f438d7b`
- this seems nicely testable in unit tests by mocking the `EthClient` and returning the values you expect for signing and verifying, so you should be able to cover most of the code.
- what about the block hash? don't we need to exclude the `ExtraData` from the block hash calculation so that we can create the same hashes when reconstructing blocks from L1?
- how does an upgrade work with the new consensus mechanism? this is a mandatory upgrade, so node operators need to upgrade before a hard fork. but how exactly do we support and switch the consensus mechanism from clique to system contract based on a fork given in the config?
```go
s.lock.RLock()
defer s.lock.RUnlock()
transitions := s.signerAddressesL1
idx := sort.Search(len(transitions), func(i int) bool {
```
to me this looks like overkill and too complicated for what it needs to do. if we have a DESC sorted list based on startBlock, then we can just iterate through it and if blockNum > startBlock we found the signer. the list should be short enough, and in most executions (the node is synced and live) it will find the correct signer in the first loop iteration.
Good point, I will make the list DESC sorted when inserting (l1 contract PR). Thanks.
Update: no need to do this as we are going for now with one single signer.
```go
}

if err := s.correctSigner(header); err != nil {
	return err
```
would be good to wrap the error somewhere in the trace to identify it more easily
Done, thanks.
```go
case <-s.ctx.Done():
	return
case <-syncTicker.C:
	s.lock.Lock()
```
we should only grab the lock if we really need to update the signers. otherwise, imagine we get an RPC timeout in `fetchAllSigners`: in the meantime we can't verify any block, and the entire node basically freezes for seconds for an operation that most likely won't change anything.
That means: lock only on the update, and I'd even say get and compare the values before touching the write lock, so that the time and interruption in the normal node flow is as small as possible.
Good point. Thanks. Updated.
```go
)

const (
	defaultSyncInterval = 10
```
```diff
-	defaultSyncInterval = 10
+	defaultSyncInterval = 10 * time.Second
```

if you use it like it is in `syncTicker := time.NewTicker(defaultSyncInterval)`, this means the unit is `time.Nanosecond`
Thanks. Rejecting suggestion but committing this with other suggestions at once.
```go
s.lock.Lock()
signers, err := s.fetchAllSigners(nil) // nil -> latest state
if err != nil {
	log.Error("failed to get signer address from L1 System Contract", "err", err)
}
s.signerAddressesL1 = signers
s.lock.Unlock()
```
Maybe this should even be done synchronously on startup and fail the node if it fails? if the node can't read the config it won't be able to do anything no?
It won't, but it will try again in 10 seconds?
Thanks for the comments @jonastheis, addressed them except for the one on failing the node on Start (commented there in the convo)
Force-pushed from `7774ca6` to `a77524f`
Bringing back the implementation with only one signer, see discussion here.
1. Purpose or design rationale of this PR
Add a new consensus to deprecate clique
Project doc
...
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in `params/version.go` been updated?
4. Breaking change label
Does this PR have the `breaking-change` label?