Fix reorg and improve continuity checks #1614

Open · wants to merge 42 commits into base: dev_2-0
Changes from all commits
Commits (42)
dbf7ea0
added sync status test
Oscar-Pepper Dec 23, 2024
83be516
added sync status with test
Oscar-Pepper Dec 23, 2024
09f28e1
Merge branch 'sync_integration_pt1' into sync_status
Oscar-Pepper Dec 24, 2024
3768c5b
resolve conflicts and re-organise
Oscar-Pepper Dec 24, 2024
25ece4d
set new grpc method to debug level logging
Oscar-Pepper Dec 24, 2024
36aabd9
Merge branch 'sync_integration_pt1' into sync_status
Oscar-Pepper Dec 26, 2024
d4e2a9c
move forming of vec of subtree roots earlier to use for batching
Oscar-Pepper Dec 26, 2024
52c80ac
improve comments
Oscar-Pepper Dec 26, 2024
f72e996
fix clippy
Oscar-Pepper Dec 26, 2024
597caec
fix conflicts
Oscar-Pepper Dec 26, 2024
a0281c6
implemented batch by shards
Oscar-Pepper Dec 27, 2024
c23b617
fixed punch scan priority
Oscar-Pepper Dec 28, 2024
628f9b4
fix doc warnings
Oscar-Pepper Dec 28, 2024
4a44cdf
fix whitespace
Oscar-Pepper Dec 30, 2024
0d20cf1
implement logic for scanning sapling shard ranges on spend detection
Oscar-Pepper Dec 30, 2024
f74ba3a
set found note shard priorities
Oscar-Pepper Dec 30, 2024
2c4c522
add logic for shard boundaries
Oscar-Pepper Dec 30, 2024
072430c
fix clippy
Oscar-Pepper Dec 30, 2024
28c7c18
remove debug
Oscar-Pepper Dec 30, 2024
fec01ed
fix reorg bug and add end seam block for improved continuity checks
Oscar-Pepper Dec 31, 2024
627769a
fix clippy and combine located tree builds into one spawn blocking
Oscar-Pepper Dec 31, 2024
feb5d2a
revisit wallet data cleanup
Oscar-Pepper Dec 31, 2024
6d4daee
clear locators
Oscar-Pepper Dec 31, 2024
0a21d09
fix bug in
Oscar-Pepper Dec 31, 2024
6175ba0
add max re-org window
Oscar-Pepper Dec 31, 2024
03bbbc8
remove todo
Oscar-Pepper Dec 31, 2024
7eae9d6
solved conflicts with dev_2-0
Oscar-Pepper Jan 7, 2025
ac4d743
removed sync feature from libtonode
Oscar-Pepper Jan 7, 2025
9666f33
Merge branch 'sync_status' into shard_ranges
Oscar-Pepper Jan 7, 2025
b561545
Merge branch 'shard_ranges' into fix_reorg_and_improve_continuity_checks
Oscar-Pepper Jan 7, 2025
790f417
added outputs to initial sync state
Oscar-Pepper Jan 15, 2025
b8b2aca
updated sync status to include outputs
Oscar-Pepper Jan 16, 2025
a18ec66
format
Oscar-Pepper Jan 16, 2025
f792789
solve merge conflicts with sync_status branch changes
Oscar-Pepper Jan 16, 2025
312f5c8
solve merge conflicts with shard_ranges branch changes
Oscar-Pepper Jan 16, 2025
23f4442
retain all scanned ranges boundary blocks in the wallet
Oscar-Pepper Jan 16, 2025
45c95e6
fix overflow bugs
Oscar-Pepper Jan 16, 2025
fb2685a
Merge branch 'sync_status' into shard_ranges
Oscar-Pepper Jan 16, 2025
5c34d84
Merge branch 'shard_ranges' into fix_reorg_and_improve_continuity_checks
Oscar-Pepper Jan 16, 2025
20c0eb4
fix case where block height is lower than activation height when dete…
Oscar-Pepper Jan 17, 2025
acfd0af
Merge branch 'shard_ranges' into fix_reorg_and_improve_continuity_checks
Oscar-Pepper Jan 17, 2025
ee41d5d
corrected boundaries to bounds
Oscar-Pepper Jan 28, 2025
3 changes: 1 addition & 2 deletions libtonode-tests/Cargo.toml
@@ -6,14 +6,13 @@ edition = "2021"
[features]
chain_generic_tests = []
ci = ["zingolib/ci"]
sync = ["dep:zingo-sync"]

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
zingolib = { path = "../zingolib", features = [ "deprecations", "test-elevation" ] }
zingo-status = { path = "../zingo-status" }
zingo-netutils = { path = "../zingo-netutils" }
zingo-sync = { path = "../zingo-sync", optional = true }
zingo-sync = { path = "../zingo-sync" }
testvectors = { path = "../testvectors" }

bip0039.workspace = true
57 changes: 55 additions & 2 deletions libtonode-tests/tests/sync.rs
@@ -1,7 +1,9 @@
use std::time::Duration;

use tempfile::TempDir;
use testvectors::seeds::HOSPITAL_MUSEUM_SEED;
use zingo_netutils::GrpcConnector;
use zingo_sync::sync::sync;
use zingo_sync::sync::{self, sync};
use zingolib::{
config::{construct_lightwalletd_uri, load_clientconfig, DEFAULT_LIGHTWALLETD_SERVER},
get_base_address_macro,
@@ -10,7 +12,7 @@ use zingolib::{
wallet::WalletBase,
};

#[ignore = "too slow, and flakey"]
#[ignore = "temporary mainnet test for sync development"]
#[tokio::test]
async fn sync_mainnet_test() {
rustls::crypto::ring::default_provider()
@@ -49,6 +51,57 @@ async fn sync_mainnet_test() {
dbg!(&wallet.sync_state);
}

#[ignore = "mainnet test for large chain"]
#[tokio::test]
async fn sync_status() {
rustls::crypto::ring::default_provider()
.install_default()
.expect("Ring to work as a default");
tracing_subscriber::fmt().init();

let uri = construct_lightwalletd_uri(Some(DEFAULT_LIGHTWALLETD_SERVER.to_string()));
let temp_dir = TempDir::new().unwrap();
let temp_path = temp_dir.path().to_path_buf();
let config = load_clientconfig(
uri.clone(),
Some(temp_path),
zingolib::config::ChainType::Mainnet,
true,
)
.unwrap();
let lightclient = LightClient::create_from_wallet_base_async(
WalletBase::from_string(HOSPITAL_MUSEUM_SEED.to_string()),
&config,
2_750_000,
// 2_670_000,
true,
)
.await
.unwrap();

let client = GrpcConnector::new(uri).get_client().await.unwrap();

let wallet = lightclient.wallet.clone();
let sync_handle = tokio::spawn(async move {
sync(client, &config.chain, wallet).await.unwrap();
});

let wallet = lightclient.wallet.clone();
tokio::spawn(async move {
loop {
let wallet = wallet.clone();
let sync_status = sync::sync_status(wallet).await;
dbg!(sync_status);
tokio::time::sleep(Duration::from_secs(1)).await;
}
});

sync_handle.await.unwrap();

dbg!(&lightclient.wallet.lock().await.wallet_blocks);
}

// temporary test for sync development
#[tokio::test]
async fn sync_test() {
tracing_subscriber::fmt().init();
13 changes: 9 additions & 4 deletions zingo-sync/src/client.rs
@@ -93,7 +93,7 @@ pub async fn get_subtree_roots(
start_index: u32,
shielded_protocol: i32,
max_entries: u32,
) -> Result<tonic::Streaming<SubtreeRoot>, ()> {
) -> Result<Vec<SubtreeRoot>, ()> {
let (reply_sender, reply_receiver) = oneshot::channel();
fetch_request_sender
.send(FetchRequest::GetSubtreeRoots(
@@ -103,8 +103,13 @@
max_entries,
))
.unwrap();
let shards = reply_receiver.await.unwrap();
Ok(shards)
let mut subtree_root_stream = reply_receiver.await.unwrap();
let mut subtree_roots = Vec::new();
while let Some(subtree_root) = subtree_root_stream.message().await.unwrap() {
subtree_roots.push(subtree_root);
}

Ok(subtree_roots)
}

/// Gets the frontiers for a specified block height.
@@ -188,7 +193,7 @@ pub async fn get_transparent_address_transactions(
pub async fn get_mempool_transaction_stream(
client: &mut CompactTxStreamerClient<zingo_netutils::UnderlyingService>,
) -> Result<tonic::Streaming<RawTransaction>, ()> {
tracing::info!("Fetching mempool stream");
tracing::debug!("Fetching mempool stream");
let mempool_stream = fetch::get_mempool_stream(client).await.unwrap();

Ok(mempool_stream)
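For context, the change above replaces the streamed subtree roots with a fully collected Vec so the complete set is available up front when forming shard-based scan batches. A minimal, generic sketch of the pattern (not code from this PR; the function name is illustrative and error handling is simplified):

    // Drain a gRPC stream into memory so callers can index, count and batch the
    // items without holding the stream open while scanning.
    async fn collect_stream<T>(
        mut stream: tonic::Streaming<T>,
    ) -> Result<Vec<T>, tonic::Status> {
        let mut items = Vec::new();
        while let Some(item) = stream.message().await? {
            items.push(item);
        }
        Ok(items)
    }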
14 changes: 7 additions & 7 deletions zingo-sync/src/client/fetch.rs
@@ -103,17 +103,17 @@ async fn fetch_from_server(
) -> Result<(), ()> {
match fetch_request {
FetchRequest::ChainTip(sender) => {
tracing::info!("Fetching chain tip.");
tracing::debug!("Fetching chain tip.");
let block_id = get_latest_block(client).await.unwrap();
sender.send(block_id).unwrap();
}
FetchRequest::CompactBlockRange(sender, block_range) => {
tracing::info!("Fetching compact blocks. {:?}", &block_range);
tracing::debug!("Fetching compact blocks. {:?}", &block_range);
let compact_blocks = get_block_range(client, block_range).await.unwrap();
sender.send(compact_blocks).unwrap();
}
FetchRequest::GetSubtreeRoots(sender, start_index, shielded_protocol, max_entries) => {
tracing::info!(
tracing::debug!(
"Fetching subtree roots. start index: {}. shielded protocol: {}",
start_index,
shielded_protocol
@@ -124,19 +124,19 @@
sender.send(shards).unwrap();
}
FetchRequest::TreeState(sender, block_height) => {
tracing::info!("Fetching tree state. {:?}", &block_height);
tracing::debug!("Fetching tree state. {:?}", &block_height);
let tree_state = get_tree_state(client, block_height).await.unwrap();
sender.send(tree_state).unwrap();
}
FetchRequest::Transaction(sender, txid) => {
tracing::info!("Fetching transaction. {:?}", txid);
tracing::debug!("Fetching transaction. {:?}", txid);
let transaction = get_transaction(client, consensus_parameters, txid)
.await
.unwrap();
sender.send(transaction).unwrap();
}
FetchRequest::UtxoMetadata(sender, (addresses, start_height)) => {
tracing::info!(
tracing::debug!(
"Fetching unspent transparent output metadata from {:?} for addresses:\n{:?}",
&start_height,
&addresses
@@ -147,7 +147,7 @@
sender.send(utxo_metadata).unwrap();
}
FetchRequest::TransparentAddressTxs(sender, (address, block_range)) => {
tracing::info!(
tracing::debug!(
"Fetching raw transactions in block range {:?} for address {:?}",
&block_range,
&address
138 changes: 128 additions & 10 deletions zingo-sync/src/primitives.rs
@@ -1,6 +1,9 @@
//! Module for primitive structs associated with the sync engine

use std::collections::{BTreeMap, BTreeSet};
use std::{
collections::{BTreeMap, BTreeSet},
ops::Range,
};

use getset::{CopyGetters, Getters, MutGetters, Setters};

@@ -21,27 +24,84 @@ use crate::{
utils,
};

/// Block height and txid of relevant transactions that have yet to be scanned. These may be added due to spend
/// detections or transparent output discovery.
/// Block height and txid of relevant transactions that have yet to be scanned. These may be added due to transparent
/// output/spend discovery or for targeted rescan.
pub type Locator = (BlockHeight, TxId);

/// Initial sync state.
///
/// All fields will be reset when a new sync session starts.
#[derive(Debug, Clone, CopyGetters, Setters)]
#[getset(get_copy = "pub", set = "pub")]
pub struct InitialSyncState {
/// One block above the fully scanned wallet height at start of sync session.
sync_start_height: BlockHeight,
/// The tree sizes of the fully scanned height and chain tip at start of sync session.
sync_tree_bounds: TreeBounds,
/// Total number of blocks to scan.
total_blocks_to_scan: u32,
/// Total number of sapling outputs to scan.
total_sapling_outputs_to_scan: u32,
/// Total number of orchard outputs to scan.
total_orchard_outputs_to_scan: u32,
}

impl InitialSyncState {
/// Create new InitialSyncState
pub fn new() -> Self {
InitialSyncState {
sync_start_height: 0.into(),
sync_tree_bounds: TreeBounds {
sapling_initial_tree_size: 0,
sapling_final_tree_size: 0,
orchard_initial_tree_size: 0,
orchard_final_tree_size: 0,
},
total_blocks_to_scan: 0,
total_sapling_outputs_to_scan: 0,
total_orchard_outputs_to_scan: 0,
}
}
}

impl Default for InitialSyncState {
fn default() -> Self {
Self::new()
}
}
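As a rough illustration of how the totals above could relate to the captured tree bounds (an assumption for exposition, not code from this PR; `TreeBounds` is defined further down in this file), the outputs remaining per pool would be the difference between the final and initial note commitment tree sizes:

    // Hypothetical helper: outputs to scan per pool, assuming the totals are the
    // difference between the chain-tip and fully-scanned tree sizes.
    fn outputs_to_scan(bounds: &TreeBounds) -> (u32, u32) {
        let sapling = bounds.sapling_final_tree_size - bounds.sapling_initial_tree_size;
        let orchard = bounds.orchard_final_tree_size - bounds.orchard_initial_tree_size;
        (sapling, orchard)
    }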

/// Encapsulates the current state of sync
#[derive(Debug, Getters, MutGetters)]
#[derive(Debug, Clone, Getters, MutGetters, CopyGetters, Setters)]
#[getset(get = "pub", get_mut = "pub")]
pub struct SyncState {
/// A vec of block ranges with scan priorities from wallet birthday to chain tip.
/// In block height order with no overlaps or gaps.
scan_ranges: Vec<ScanRange>,
/// The block ranges that contain all sapling outputs of complete sapling shards.
///
/// There is an edge case where a range may include two (or more) shards. However, this only occurs when the lower
/// shards are already scanned, so it will cause no issues when punching in the higher scan priorities.
sapling_shard_ranges: Vec<Range<BlockHeight>>,
/// The block ranges that contain all orchard outputs of complete orchard shards.
///
/// There is an edge case where a range may include two (or more) shards. However, this only occurs when the lower
/// shards are already scanned, so it will cause no issues when punching in the higher scan priorities.
orchard_shard_ranges: Vec<Range<BlockHeight>>,
/// Locators for transactions relevant to the wallet.
locators: BTreeSet<Locator>,
/// Initial sync state.
initial_sync_state: InitialSyncState,
}

impl SyncState {
/// Create new SyncState
pub fn new() -> Self {
SyncState {
scan_ranges: Vec::new(),
sapling_shard_ranges: Vec::new(),
orchard_shard_ranges: Vec::new(),
locators: BTreeSet::new(),
initial_sync_state: InitialSyncState::new(),
}
}

@@ -53,6 +113,8 @@ impl SyncState {
}

/// Returns the block height at which all blocks equal to and below this height are scanned.
///
/// Will panic if called before scan ranges are updated for the first time.
pub fn fully_scanned_height(&self) -> BlockHeight {
if let Some(scan_range) = self
.scan_ranges()
@@ -68,6 +130,41 @@ impl SyncState {
.end
}
}

/// Returns the highest block height that has been scanned.
///
/// If no scan ranges have been scanned, returns the block below the wallet birthday.
/// Will panic if called before scan ranges are updated for the first time.
pub fn highest_scanned_height(&self) -> BlockHeight {
if let Some(last_scanned_range) = self
.scan_ranges()
.iter()
.filter(|scan_range| scan_range.priority() == ScanPriority::Scanned)
.last()
{
last_scanned_range.block_range().end - 1
} else {
self.wallet_birthday()
.expect("scan ranges always non-empty")
- 1
}
}

/// Returns the wallet birthday or `None` if `self.scan_ranges` is empty.
///
/// If the wallet birthday is below the sapling activation height, returns the sapling activation height instead.
pub fn wallet_birthday(&self) -> Option<BlockHeight> {
self.scan_ranges()
.first()
.map(|range| range.block_range().start)
}

/// Returns the last known chain height to the wallet or `None` if `self.scan_ranges` is empty.
pub fn wallet_height(&self) -> Option<BlockHeight> {
self.scan_ranges()
.last()
.map(|range| range.block_range().end - 1)
}
}
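To show how the new accessors compose (an illustrative sketch, not part of the PR; the helper name is hypothetical and it assumes scan ranges have been updated at least once, as the doc comments require):

    // Hypothetical helper: number of blocks still to scan before the wallet is
    // fully synced to its last known chain height.
    fn blocks_remaining(sync_state: &SyncState) -> u32 {
        let wallet_height = sync_state
            .wallet_height()
            .expect("scan ranges are non-empty after the first update");
        u32::from(wallet_height) - u32::from(sync_state.fully_scanned_height())
    }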

impl Default for SyncState {
@@ -76,6 +173,30 @@ impl Default for SyncState {
}
}

#[derive(Debug, Clone, Copy)]
pub struct TreeBounds {
pub sapling_initial_tree_size: u32,
pub sapling_final_tree_size: u32,
pub orchard_initial_tree_size: u32,
pub orchard_final_tree_size: u32,
}

/// A snapshot of the current state of sync. Useful for displaying the status of sync to a user / consumer.
///
/// `percentage_outputs_scanned` is a much more accurate indicator of sync completion than `percentage_blocks_scanned`.
#[derive(Debug, Clone, Getters)]
pub struct SyncStatus {
pub scan_ranges: Vec<ScanRange>,
pub scanned_blocks: u32,
pub unscanned_blocks: u32,
pub percentage_blocks_scanned: f32,
pub scanned_sapling_outputs: u32,
pub unscanned_sapling_outputs: u32,
pub scanned_orchard_outputs: u32,
pub unscanned_orchard_outputs: u32,
pub percentage_outputs_scanned: f32,
}
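A consumer-side sketch of how a `SyncStatus` snapshot might be rendered (illustrative only; the helper name and output formatting are assumptions, while the field names match the struct above):

    fn render_progress(status: &SyncStatus) -> String {
        // Outputs are the more accurate progress signal, per the doc comment above.
        let scanned_outputs = status.scanned_sapling_outputs + status.scanned_orchard_outputs;
        let total_outputs = scanned_outputs
            + status.unscanned_sapling_outputs
            + status.unscanned_orchard_outputs;
        format!(
            "blocks {}/{} ({:.1}%), outputs {}/{} ({:.1}%)",
            status.scanned_blocks,
            status.scanned_blocks + status.unscanned_blocks,
            status.percentage_blocks_scanned,
            scanned_outputs,
            total_outputs,
            status.percentage_outputs_scanned,
        )
    }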

/// Output ID for a given pool type
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy, CopyGetters)]
#[getset(get_copy = "pub")]
@@ -150,8 +271,7 @@ pub struct WalletBlock {
time: u32,
#[getset(skip)]
txids: Vec<TxId>,
sapling_commitment_tree_size: u32,
orchard_commitment_tree_size: u32,
tree_bounds: TreeBounds,
}

impl WalletBlock {
@@ -161,17 +281,15 @@
prev_hash: BlockHash,
time: u32,
txids: Vec<TxId>,
sapling_commitment_tree_size: u32,
orchard_commitment_tree_size: u32,
tree_bounds: TreeBounds,
) -> Self {
Self {
block_height,
block_hash,
prev_hash,
time,
txids,
sapling_commitment_tree_size,
orchard_commitment_tree_size,
tree_bounds,
}
}
