Dafny latest proof progress WIP #3009

Open. Wants to merge 38 commits into base: master.

38 commits
dd3af46
Manual merge with latest proof additions.
HristoStaykov Jan 9, 2023
35cd764
Code restructuring for clarity and better automation handling.
HristoStaykov Jan 11, 2023
2350872
Progress in the proof.
HristoStaykov Jan 12, 2023
8b87dc8
Additions to the global invariant.
HristoStaykov Jan 16, 2023
e6c7896
Fix for sending PrePrepare.
HristoStaykov Jan 18, 2023
42f486b
Added CurrentNewViewMsg predicate.
HristoStaykov Jan 20, 2023
3829d98
Added RecordedViewChangeMsgsCameFromNetwork predicate to the global I…
HristoStaykov Jan 20, 2023
cb60947
Additions to the replica model.
HristoStaykov Jan 25, 2023
6610140
Added predicates to global Invariant.
HristoStaykov Jan 25, 2023
77fb0bc
Fixes regarding indexing of Stable Checkpoints.
HristoStaykov Jan 30, 2023
c8a658a
Code restructuring.
HristoStaykov Jan 30, 2023
0067a3b
Lemma HonestPreservesRecordedViewChangeMsgsAreValid verified.
HristoStaykov Feb 3, 2023
1c8c602
Addition of new Lemma and Inv predicate.
HristoStaykov Feb 6, 2023
d3f79ca
Need to introduce Commit Certificate.
HristoStaykov Feb 27, 2023
783a3cf
Initial steps for proving HonestPreservesEveryCommitMsgIsRememberedBy…
HristoStaykov Mar 1, 2023
1183fdf
HonestPreservesEveryCommitMsgIsRememberedByItsSender proven for trivi…
HristoStaykov Mar 2, 2023
b476f58
Debugging session.
HristoStaykov Mar 10, 2023
ca863a1
Prove for HonestPreservesEveryCommitMsgIsRememberedByItsSenderForComm…
HristoStaykov Mar 13, 2023
8a50c2f
Proof cleanup - removed debugging code.
HristoStaykov Mar 13, 2023
f081492
Added lemma HonestPreservesEveryCommitMsgIsRememberedByItsSenderForRe…
HristoStaykov Mar 15, 2023
c0af5bd
In progress debugging of HonestPreservesEveryCommitMsgIsRememberedByI…
HristoStaykov Mar 17, 2023
3aef91f
Proof for certificate.valid complete in case commitMsg.payload.view <…
HristoStaykov Mar 21, 2023
497dd68
Progress on proving HonestPreservesEveryCommitMsgIsRememberedByItsSen…
HristoStaykov Mar 22, 2023
2090276
HonestPreservesEveryCommitMsgIsSupportedByAQuorumOfRecordedPrepares p…
HristoStaykov Mar 23, 2023
eff1eda
Proof that sets extracted form the Working window have consistent car…
HristoStaykov Mar 27, 2023
3cd7d9f
Code cleanup.
HristoStaykov Mar 29, 2023
30ea07c
Progress on prooving HonestPreservesViewChangeMsgsFromHonestInNetwork…
HristoStaykov Mar 30, 2023
0d17ebe
Progress on HonestRecvPrePrepareStepPreservesUnCommitableAgreesWithRe…
HristoStaykov Mar 30, 2023
1ad07f8
Progress on proof for HonestRecvPrePrepareStepPreservesUnCommitableAg…
HristoStaykov Apr 3, 2023
80fefac
Hints added for next steps.
HristoStaykov Apr 10, 2023
52b3a75
Minor fixes and hints for next steps.
HristoStaykov Apr 12, 2023
a3fba37
Refactor RecordedNewViewMsgsContainSentVCMsgs.
HristoStaykov Apr 25, 2023
59867cb
Refactor HonestRecvPrePrepareStepPreservesUnCommitableAgreesWithRecor…
HristoStaykov Apr 25, 2023
0f0d9a9
Split the reasoning in HonestRecvPrePrepareStepPreservesUnCommitableA…
HristoStaykov Apr 27, 2023
8138f5f
HonestRecvPrePrepareStepPreservesUnCommitableAgreesWithRecordedPrePre…
HristoStaykov May 2, 2023
cd6b389
Progress in proving UnCommitableIfNoCertificates.
HristoStaykov May 3, 2023
928f30e
Summary for progress so far.
HristoStaykov May 4, 2023
7325914
Insert assume false-s where the proof is in WIP.
HristoStaykov May 17, 2023
30 changes: 27 additions & 3 deletions docs/sbft-formal-model/README.md
@@ -1,4 +1,5 @@
# Project overview
## Project overview

The aim of this sub-project is to both document and to formally specify the concord-bft state machine replication (SMR) protocol.

Given the distributed nature of the concord-bft protocol, unit tests offer limited means of establishing the desired state machine replication (SMR) safety properties. The main tool we use for proving correctness is an integration testing framework code-named [Apollo](https://github.com/vmware/concord-bft/tree/master/tests/apollo) which creates mini-blockchain deployments and exercises various runtime scenarios (including straight case, replica failures, network faults, as well as certain byzantine cases).
@@ -7,7 +8,7 @@ Apollo does give us good confidence as to the correctness and fault tolerance of

We use Dafny and first-order logic to model and prove the safety properties of the protocol's core state machine (liveness properties are out of scope for now).

# Dev instructions
## Dev instructions

The Dafny model can be used as a document, for proving SBFT correctness properties, or as input for property-based tests of the implementation's core components.

@@ -27,6 +28,7 @@ newgrp docker
```

## Pull the image.

(This is the slow thing; it includes a couple GB of Ubuntu.)

```bash
@@ -69,6 +71,28 @@ If everything is working, you should see something like:
Dafny program verifier finished with 10 verified, 0 errors
```

# Acknowledgements and references
## Acknowledgements and references

Special thanks to [@jonhnet](https://github.com/jonhnet) for his help and support along the way.

## Current status

We have completed the modeling of *replica.i.dfy*, covering the following parts of the protocol:

1. Slow Commit Path - the 2-phase commit path that requires collecting a Prepared Certificate (2F+1 matching Prepare messages) and a Committed Certificate (2F+1 matching Commit messages).
2. View Change - the mechanism that the system uses to replace the leader (Primary) in order to preserve liveness.
3. Sliding of the Working Window and Checkpointing - periodically the replicas exchange Checkpoint messages to reach agreement on the state. This also enables them to slide the Working Window of Sequence IDs that can be considered for consensus in each replica. Checkpointing is also crucial for enabling replicas to catch up after a long period of downtime. A Checkpoint represents a summary of the state (a hash in the implementation code, and the entire state itself in the model) that can be used as a correctness check when performing State Transfer.
4. State Transfer - the mechanism by which replicas help peers that have been disconnected long enough that they can no longer catch up on their own, because the system's Working Window has advanced beyond the particular replica's Working Window. In the implementation code this process transfers portions of the state until the lagging replica has caught up. In the model each Checkpoint message carries the entire state, so we can perform State Transfer once we collect a quorum (2F+1) of matching Checkpoint messages, adopting the state carried in those messages (they have to be equal) if it turns out to be more recent than our current state. We determine recency based on the highest Sequence ID for which the system has reached consensus.
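The State Transfer rule in point 4 can be sketched as follows. This is a minimal illustration with hypothetical names, not the Dafny model's actual API:

```python
# Hypothetical sketch of the State Transfer rule: adopt a peer state only if
# 2F+1 matching Checkpoint messages vouch for it and it is more recent than
# our current state. Names here are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class CheckpointMsg:
    sender: int
    seq_id_reached: int  # highest Sequence ID covered by the sender's state
    state: str           # the whole state in the model; a hash in the implementation

def try_state_transfer(my_seq_id, checkpoints, f):
    quorum = 2 * f + 1
    # one vote per sender; matching messages must agree on (seq_id, state)
    votes = {m.sender: (m.seq_id_reached, m.state) for m in checkpoints}
    counts = Counter(votes.values())
    for (seq_id, state), n in counts.items():
        if n >= quorum and seq_id > my_seq_id:
            return seq_id, state  # adopt the more recent agreed state
    return None  # keep the current state
```

Counting one vote per sender mirrors the UniqueSenders requirement that the model imposes on certificates.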

### Modeling of malicious behavior in the system: *faulty_replica.i.dfy*

To validate the protocol in the presence of malicious behavior once we start writing the proof, we needed a subset of the replicas (up to F) to act in ways that could try to corrupt the system. We achieve this by not restricting the F faulty replicas' actions in any specific way, thus allowing completely arbitrary behavior. Arbitrary behavior is a superset of malicious behavior, which is itself a superset of the correct (honest) behavior that follows the protocol steps.

### Message signature checks support in *network.s.dfy*

In order to model signatures we use the network as a trusted arbiter. It stores all previously sent messages and can determine whether a message really originated from a given host; the network rejects messages from hosts that try to impersonate another one. We also provide a mechanism to validate sub-messages (some of the protocol messages are composite, carrying messages collected from peers in the model and hashes of those messages in the implementation code).
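A minimal sketch of this idea, with assumed names rather than the ones in *network.s.dfy*:

```python
# Sketch of the network as a trusted arbiter: it remembers every message it
# delivered and refuses a send whose claimed sender differs from the actual
# sending host, standing in for cryptographic signatures in the model.

class Network:
    def __init__(self):
        self.sent = set()  # history of all (sender, payload) pairs ever sent

    def send(self, actual_host, claimed_sender, payload):
        if actual_host != claimed_sender:
            return False  # impersonation attempt: rejected
        self.sent.add((claimed_sender, payload))
        return True

    def signature_valid(self, sender, payload):
        # a message (or sub-message) checks out iff the network saw that host send it
        return (sender, payload) in self.sent
```

The same membership check covers sub-messages: a composite message is validated by checking that each carried message appears in the network's history under its claimed sender.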

### Progress on the proof

We have completed the proof that Commit messages from honest senders agree within a given View; put differently, that consensus is reached within a View. This is a stepping stone towards proving that consensus is maintained even in the presence of multiple View Changes and at most F faulty replicas. The next milestone is proving UnCommitableAgreesWithPrepare. This predicate binds the future and the past, showing that if an honest Replica sends a Prepare message in a given View (effectively voting for the Primary's proposal to assign a Client operation to a Sequence ID), it has performed all the checks necessary to validate that nothing else is committable in prior Views.
For the current state of the proof we have disabled sliding of the Working Window and Checkpointing in order to split the effort into smaller steps.
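As a back-of-the-envelope check of why Commit messages from honest senders must agree within a View, assume the standard PBFT-style sizing of N = 3F+1 replicas with agreement quorums of 2F+1: any two quorums intersect in at least F+1 replicas, so at least one honest replica sits in both and will not vote for two different operations at the same (View, Sequence ID).

```python
# Quorum-intersection arithmetic (assuming N = 3F + 1 and quorum = 2F + 1,
# the sizing the README's certificates use).

def min_quorum_overlap(f: int) -> int:
    n = 3 * f + 1          # total replicas
    quorum = 2 * f + 1     # votes per certificate
    return 2 * quorum - n  # worst-case intersection of two quorums, == f + 1

def honest_in_overlap(f: int) -> int:
    # even if all F faulty replicas land in the intersection,
    # at least one honest replica remains in both quorums
    return min_quorum_overlap(f) - f

assert all(honest_in_overlap(f) >= 1 for f in range(1, 100))
```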
8 changes: 8 additions & 0 deletions docs/sbft-formal-model/cluster_config.s.dfy
@@ -14,6 +14,8 @@ include "network.s.dfy"
module ClusterConfig {
import opened HostIdentifiers

type ViewNum = nat

datatype Constants = Constants(
maxByzantineFaultyReplicas:nat,
numClients:nat,
@@ -32,6 +34,12 @@ module ClusterConfig {
N() + numClients
}

function PrimaryForView(view:ViewNum) : nat
requires WF()
{
view % N()
}

function F() : nat
requires WF()
{
21 changes: 21 additions & 0 deletions docs/sbft-formal-model/library/Library.dfy
@@ -1,4 +1,25 @@
module Library {
lemma MappedSetCardinality<T, V>(p:set<T>, f:(T)-->V, filtered:set<V>)
requires forall t | t in p :: f.requires(t)
requires forall x, y | && x in p
&& y in p
&& f(x) == f(y)
:: x == y
//requires filtered == set t | t in p :: f(t) // TODO: bug in Dafny makes this un-triggerable. Uncomment when we find a workaround!
ensures |filtered| == |p|
{
assert filtered == set t | t in p :: f(t) by {
assume false; // TODO: temporary fix for triggering problem in Dafny.
}
if |p| != 0 {
var t0 :| t0 in p;
var sub_p := p - {t0};
MappedSetCardinality(sub_p, f, set t | t in sub_p :: f(t));
assert p == sub_p + {t0}; // Trigger extensionality
assert (set t | t in p :: f(t)) == (set t | t in sub_p :: f(t)) + {f(t0)}; // Trigger extensionality
}
}

function DropLast<T>(theSeq: seq<T>) : seq<T>
requires 0 < |theSeq|
{
105 changes: 81 additions & 24 deletions docs/sbft-formal-model/messages.dfy
@@ -11,14 +11,15 @@

include "network.s.dfy"
include "library/Library.dfy"
include "cluster_config.s.dfy"

module Messages {
import Library
import opened HostIdentifiers
import Network
import opened ClusterConfig

type SequenceID = k:nat | 0 < k witness 1
type ViewNum = nat
type SequenceID = nat

datatype ClientOperation = ClientOperation(sender:HostId, timestamp:nat)

@@ -40,26 +41,42 @@ module Messages {
predicate WF() {
(forall v | v in votes :: v.payload.Prepare?)
}
predicate valid(quorumSize:nat) {
predicate validFull(clusterConfig:ClusterConfig.Constants, seqID:SequenceID)
requires clusterConfig.WF()
{
&& |votes| >= clusterConfig.AgreementQuorum()
&& WF()
&& prototype().seqID == seqID
&& (forall v | v in votes :: v.payload == prototype()) // messages have to be votes that match each other by the prototype
&& UniqueSenders(votes)
&& (forall v | v in votes :: clusterConfig.IsReplica(v.sender))
}
predicate valid(clusterConfig:ClusterConfig.Constants, seqID:SequenceID)
requires clusterConfig.WF()
{
|| empty()
|| (&& |votes| == quorumSize
&& WF()
&& (forall v | v in votes :: v.payload == prototype()) // messages have to be votes that match each other by the prototype
&& UniqueSenders(votes))
|| validFull(clusterConfig, seqID)
}
predicate empty() {
&& |votes| == 0
}
predicate votesRespectView(view:ViewNum)
requires WF()
{
!empty() ==> prototype().view < view
}
}

datatype ViewChangeMsgsSelectedByPrimary = ViewChangeMsgsSelectedByPrimary(msgs:set<Network.Message<Message>>) {
predicate valid(view:ViewNum, quorumSize:nat) {
predicate valid(view:ViewNum, clusterConfig:ClusterConfig.Constants)
requires clusterConfig.WF()
{
&& |msgs| > 0
&& (forall v | v in msgs :: && v.payload.ViewChangeMsg?
&& v.payload.validViewChangeMsg(quorumSize)
&& v.payload.validViewChangeMsg(clusterConfig)
&& v.payload.newView == view) // All the ViewChange messages have to be for the same View.
&& UniqueSenders(msgs)
&& |msgs| == quorumSize //TODO: once proof is complete try with >=
&& |msgs| == clusterConfig.AgreementQuorum() //TODO: once proof is complete try with >=
}
}

@@ -71,12 +88,14 @@
prot.payload
}
predicate valid(lastStableCheckpoint:SequenceID, quorumSize:nat) {
&& |msgs| > 0
&& UniqueSenders(msgs)
&& (forall m | m in msgs :: && m.payload.CheckpointMsg?
&& m.payload == prototype()
&& m.payload.seqIDReached == lastStableCheckpoint)
&& |msgs| >= quorumSize
|| (&& lastStableCheckpoint == 0
&& |msgs| == 0)
|| (&& |msgs| > 0
&& UniqueSenders(msgs)
&& (forall m | m in msgs :: && m.payload.CheckpointMsg?
&& m.payload == prototype()
&& m.payload.seqIDReached == lastStableCheckpoint)
&& |msgs| >= quorumSize)
}
}

@@ -87,7 +106,7 @@
:: m1.sender != m2.sender)
}

// Define your Message datatype here.
// Define your Message datatype here. // TODO: rename to payload.
datatype Message = | PrePrepare(view:ViewNum, seqID:SequenceID, operationWrapper:OperationWrapper)
| Prepare(view:ViewNum, seqID:SequenceID, operationWrapper:OperationWrapper)
| Commit(view:ViewNum, seqID:SequenceID, operationWrapper:OperationWrapper)
@@ -99,26 +118,64 @@
| NewViewMsg(newView:ViewNum, vcMsgs:ViewChangeMsgsSelectedByPrimary)
| CheckpointMsg(seqIDReached:SequenceID, committedClientOperations:CommittedClientOperations)
{
predicate valid(quorumSize:nat)
predicate valid(clusterConfig:ClusterConfig.Constants)
requires clusterConfig.WF()
{
&& (ViewChangeMsg? ==> validViewChangeMsg(quorumSize))
&& (NewViewMsg? ==> validNewViewMsg(quorumSize))
&& (ViewChangeMsg? ==> validViewChangeMsg(clusterConfig))
&& (NewViewMsg? ==> validNewViewMsg(clusterConfig))
}
predicate validViewChangeMsg(quorumSize:nat)
predicate checked(clusterConfig:ClusterConfig.Constants, sigChecks:set<Network.Message<Message>>)
requires clusterConfig.WF()
{
&& (ViewChangeMsg? ==> checkedViewChangeMsg(clusterConfig, sigChecks))
&& (NewViewMsg? ==> checkedNewViewMsg(clusterConfig, sigChecks))
}
predicate validViewChangeMsg(clusterConfig:ClusterConfig.Constants)
requires clusterConfig.WF()
requires ViewChangeMsg?
{
&& (forall seqID | seqID in certificates
:: && certificates[seqID].valid(clusterConfig, seqID)
&& certificates[seqID].votesRespectView(newView))
&& proofForLastStable.valid(lastStableCheckpoint, clusterConfig.AgreementQuorum())
}
predicate validNewViewMsg(clusterConfig:ClusterConfig.Constants)
requires clusterConfig.WF()
requires NewViewMsg?
{
&& vcMsgs.valid(newView, clusterConfig)
&& (forall replica | replica in sendersOf(vcMsgs.msgs)
:: clusterConfig.IsReplica(replica))
}
predicate checkedViewChangeMsg(clusterConfig:ClusterConfig.Constants, sigChecks:set<Network.Message<Message>>)
requires ViewChangeMsg?
requires clusterConfig.WF()
{
proofForLastStable.valid(lastStableCheckpoint, quorumSize)
&& valid(clusterConfig)
// Check Checkpoint msg-s signatures:
&& proofForLastStable.msgs <= sigChecks
&& (forall seqID | seqID in certificates
:: && certificates[seqID].votes <= sigChecks)
}
predicate validNewViewMsg(quorumSize:nat)
predicate checkedNewViewMsg(clusterConfig:ClusterConfig.Constants, sigChecks:set<Network.Message<Message>>)
requires NewViewMsg?
requires clusterConfig.WF()
{
vcMsgs.valid(newView, quorumSize)
&& valid(clusterConfig)
&& vcMsgs.msgs <= sigChecks
}
predicate IsIntraViewMsg() {
|| PrePrepare?
|| Prepare?
|| Commit?
}
}
predicate ValidNewViewMsg(clusterConfig:ClusterConfig.Constants, msg:Network.Message<Message>)
{
&& clusterConfig.WF()
&& msg.payload.NewViewMsg?
&& msg.payload.valid(clusterConfig)
&& clusterConfig.PrimaryForView(msg.payload.newView) == msg.sender
}
// ToDo: ClientReply
}