# jax-sync protocol


## Overview

A gossip-based protocol for synchronizing encrypted bucket state across peers using content-addressed manifests with longest-chain consensus.

## Core Data Structures

### Manifest
```rust
struct Manifest {
    id: Uuid,                    // Bucket identifier (immutable)
    name: String,                // Human-readable name
    owner: PublicKey,            // Owner's public key
    share: Share,                // Encryption key material
    entry: Link,                 // CID pointing to encrypted bucket data
    pins: Link,                  // CID pointing to pin set
    previous: Option<Link>,      // Back-pointer to parent manifest (forms chain)
    version: Version,            // Version metadata
    timestamp: u64,              // Creation timestamp (for ordering)
}
```

Each manifest is stored as IPLD with DAG-CBOR encoding, yielding a deterministic CID.
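
The encoding step itself is not spelled out elsewhere in this document. As a rough sketch (assuming the `cid`, `multihash`, and `serde_ipld_dagcbor` crates, and that `Manifest` derives `Serialize`; exact APIs vary by crate version):

```rust
use cid::Cid;
use multihash::{Code, MultihashDigest};
use serde::Serialize;

/// Multicodec code for DAG-CBOR.
const DAG_CBOR: u64 = 0x71;

/// Hypothetical helper: encode a value as canonical DAG-CBOR and derive its CIDv1.
/// The real `store_ipld_canonical` would also persist the bytes.
fn dag_cbor_cid<T: Serialize>(value: &T) -> Result<(Cid, Vec<u8>), Box<dyn std::error::Error>> {
    // DAG-CBOR encoding is canonical (sorted map keys, minimal integers),
    // so identical manifests always produce identical bytes and CIDs.
    let bytes = serde_ipld_dagcbor::to_vec(value)?;
    let hash = Code::Sha2_256.digest(&bytes);
    Ok((Cid::new_v1(DAG_CBOR, hash), bytes))
}
```

Because the encoding is canonical, two peers that independently construct byte-identical manifests derive the same CID.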

### Bucket State
```rust
struct BucketState {
    bucket_id: Uuid,
    
    // Genesis manifest (immutable root)
    genesis: Cid,
    
    // All known manifests in the DAG
    manifests: HashMap<Cid, Manifest>,
    
    // Cached chain length (distance from genesis) for each known manifest
    chain_lengths: HashMap<Cid, usize>,
    
    // Current canonical head (longest chain)
    canonical_head: Cid,
    
    // Length of canonical chain
    canonical_length: usize,
    
    // All known chain tips (for fork detection)
    known_tips: HashSet<Cid>,
}
```

## Trust Model and Verification

### Verification Logic

When receiving a new manifest, we verify:
1. Its `previous` pointer references a manifest we already have
2. The bucket ID matches
3. Basic structural validity

We trust that any manifest already in our DAG has been validated, so verification extends transitively along the chain.

```rust
async fn verify_and_add_manifest(
    &mut self,
    manifest_cid: Cid,
    manifest: Manifest,
) -> Result<()> {
    // 1. The bucket ID from the announcement is validated against manifest.id by
    //    the caller; here we key all lookups by the manifest's own bucket ID.
    let bucket_id = manifest.id;
    
    // 2. Verify one-hop: previous must exist in our DAG (or be None for genesis)
    if let Some(prev_link) = manifest.previous {
        let bucket = self.buckets.get(&bucket_id)
            .ok_or("Unknown bucket")?;
        
        if !bucket.manifests.contains_key(&prev_link.cid) {
            // We don't have the parent - fetch it first
            return Err("Missing parent manifest - fetch required");
        }
        
        // Parent exists - we trust it's valid, so this manifest is valid
    } else {
        // No previous = claimed genesis
        if let Some(bucket) = self.buckets.get(&bucket_id) {
            if bucket.genesis != manifest_cid {
                return Err("Multiple genesis manifests claimed");
            }
        }
    }
    
    // 3. Add to DAG
    self.add_manifest(manifest_cid, manifest).await?;
    
    Ok(())
}
```

### Fetching Missing Parents

If a manifest's parent is missing, fetch it before accepting:

```rust
async fn fetch_missing_history(
    &mut self,
    manifest_cid: Cid,
    manifest: Manifest,
) -> Result<()> {
    let mut to_fetch = vec![];
    let mut current = manifest.previous;
    
    // Walk backwards until we find something we have
    while let Some(prev_link) = current {
        if self.has_manifest(prev_link.cid) {
            break; // Found connection to our DAG!
        }
        
        to_fetch.push(prev_link.cid);
        
        // Fetch this manifest to continue walking
        let prev_manifest = self.fetch_manifest(prev_link.cid).await?;
        current = prev_manifest.previous;
    }
    
    // Fetch all missing manifests in order (oldest first)
    to_fetch.reverse();
    for cid in to_fetch {
        let m = self.fetch_manifest(cid).await?;
        self.verify_and_add_manifest(cid, m).await?;
    }
    
    // Now we can add the original manifest
    self.verify_and_add_manifest(manifest_cid, manifest).await?;
    
    Ok(())
}
```

## Protocol States

At any time, a bucket exists in one of these states:

### State 1: Single Chain (Normal Operation)
```
Genesis → M1 → M2 → M3 (canonical_head)
```
- **canonical_head** = M3
- **known_tips** = {M3}
- **canonical_length** = 3

### State 2: Fork Detected
```
Genesis → M1 → M2 → M3a (length 3)
                  ↘ M3b (length 3)
```
- **canonical_head** = M3a or M3b (deterministic tiebreak)
- **known_tips** = {M3a, M3b}
- **canonical_length** = 3

Fork resolution happens immediately upon detection.

### State 3: Fork Resolved
```
Genesis → M1 → M2 → M3a → M4a (length 4)
                  ↘ M3b (orphaned, length 3)
```
- **canonical_head** = M4a
- **known_tips** = {M4a, M3b}
- **canonical_length** = 4

Orphaned branches remain in DAG for auditability but are not canonical.
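
These states are not stored explicitly; they fall out of `known_tips`. A small illustrative helper (the `SyncState` enum below is not part of the protocol, just a way to name the states above):

```rust
/// Illustrative labels for the states described above.
enum SyncState {
    /// State 1: exactly one tip, which is canonical.
    SingleChain { head: Cid },
    /// States 2 and 3: multiple tips; one is canonical, the rest are competing or orphaned.
    Forked { canonical: Cid, others: Vec<Cid> },
}

fn classify(bucket: &BucketState) -> SyncState {
    if bucket.known_tips.len() <= 1 {
        SyncState::SingleChain { head: bucket.canonical_head }
    } else {
        SyncState::Forked {
            canonical: bucket.canonical_head,
            others: bucket.known_tips.iter()
                .filter(|tip| **tip != bucket.canonical_head)
                .copied()
                .collect(),
        }
    }
}
```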

## Protocol Operations

### 1. Publishing a New Manifest

When a peer wants to update a bucket:

```rust
async fn publish_manifest(
    &mut self,
    bucket_id: Uuid,
    new_data: BucketData,
) -> Result<Cid> {
    // 1. Get current canonical head
    let bucket = self.buckets.get(&bucket_id)?;
    let parent = bucket.canonical_head;
    
    // 2. Encrypt and store new bucket data
    let encrypted = self.encrypt(new_data, &bucket.share)?;
    let entry_cid = self.store_blob(encrypted).await?;
    
    // 3. Create new manifest
    let manifest = Manifest {
        id: bucket_id,
        previous: Some(Link::from(parent)),
        entry: Link::from(entry_cid),
        timestamp: current_timestamp(),
        // ... other fields
    };
    
    // 4. Store manifest as IPLD (deterministic encoding)
    let manifest_cid = self.store_ipld_canonical(&manifest).await?;
    
    // 5. Update local state FIRST
    self.add_manifest(manifest_cid, manifest).await?;
    
    // 6. Broadcast via Plumtree
    self.plumtree.broadcast(ManifestAnnouncement {
        bucket_id,
        manifest_cid,
    }).await?;
    
    Ok(manifest_cid)
}
```

### 2. Receiving a Manifest Announcement

When a peer receives a manifest CID via gossip:

```rust
const FINALITY_DEPTH: usize = 10;  // Configurable safety parameter

async fn on_manifest_announcement(
    &mut self,
    bucket_id: Uuid,
    manifest_cid: Cid,
    from_peer: PeerId,
) -> Result<()> {
    // 1. Check if already known
    if self.has_manifest(manifest_cid) {
        return Ok(()); // Already processed
    }
    
    // 2. Fetch manifest data
    let manifest = self.fetch_manifest(manifest_cid).await?;
    
    // 3. Validate manifest bucket ID
    if manifest.id != bucket_id {
        return Err("Manifest bucket ID mismatch");
    }
    
    // 4. Verify one-hop chain link (fetch history if needed)
    if let Err(e) = self.verify_and_add_manifest(manifest_cid, manifest.clone()).await {
        if e.to_string().contains("Missing parent") {
            // Fetch missing history and retry
            self.fetch_missing_history(manifest_cid, manifest).await?;
        } else {
            return Err(e);
        }
    }
    
    Ok(())
}

async fn verify_and_add_manifest(
    &mut self,
    manifest_cid: Cid,
    manifest: Manifest,
) -> Result<()> {
    // 1. The announced bucket ID was already checked against manifest.id by the
    //    caller (step 3 above); all lookups here are keyed by the manifest's bucket ID.
    let bucket_id = manifest.id;
    
    // 2. Verify one-hop: previous must exist in our DAG (or be None for genesis)
    if let Some(prev_link) = manifest.previous {
        let bucket = self.buckets.get(&bucket_id)
            .ok_or("Unknown bucket")?;
        
        if !bucket.manifests.contains_key(&prev_link.cid) {
            // We don't have the parent - fetch it first
            return Err("Missing parent manifest - fetch required");
        }
        
        // 3. FINALITY CHECK: Reject if parent is too far behind canonical
        let parent_length = *bucket.chain_lengths.get(&prev_link.cid)
            .ok_or("Parent length not cached")?;
        let canonical_length = bucket.canonical_length;
        
        if canonical_length > parent_length + FINALITY_DEPTH {
            return Err(format!(
                "Parent chain too far behind canonical ({} > {} + {}). \
                 Rejected for finality - cannot build on stale chain.",
                canonical_length,
                parent_length,
                FINALITY_DEPTH
            ));
        }
        
        // Parent exists and is within finality window - valid
    } else {
        // No previous = claimed genesis
        if let Some(bucket) = self.buckets.get(&bucket_id) {
            if bucket.genesis != manifest_cid {
                return Err("Multiple genesis manifests claimed");
            }
        }
    }
    
    // 4. Add to DAG
    self.add_manifest(manifest_cid, manifest).await?;
    
    Ok(())
}
```

### 3. Adding Manifest to DAG

After validation, add manifest and update canonical head:

```rust
async fn add_manifest(
    &mut self,
    manifest_cid: Cid,
    manifest: Manifest,
) -> Result<()> {
    let bucket_id = manifest.id;
    let bucket = self.buckets.entry(bucket_id)
        .or_insert_with(|| BucketState::new(bucket_id, manifest_cid));
    
    // 1. Store manifest
    bucket.manifests.insert(manifest_cid, manifest.clone());
    
    // 2. Cache this manifest's chain length (genesis = 0, otherwise parent + 1)
    let length = match &manifest.previous {
        Some(parent_link) => bucket.chain_lengths
            .get(&parent_link.cid)
            .copied()
            .unwrap_or(0) + 1,
        None => 0,
    };
    bucket.chain_lengths.insert(manifest_cid, length);
    
    // 3. Update known tips
    //    Remove parent from tips (no longer a tip)
    if let Some(parent_link) = manifest.previous {
        bucket.known_tips.remove(&parent_link.cid);
    }
    //    Add this manifest as new tip
    bucket.known_tips.insert(manifest_cid);
    
    // 4. Recompute canonical head
    self.select_canonical_head(bucket_id).await?;
    
    Ok(())
}
```

### 4. Canonical Head Selection

The core of the protocol - longest chain wins:

```rust
async fn select_canonical_head(
    &mut self,
    bucket_id: Uuid,
) -> Result<()> {
    let bucket = self.buckets.get_mut(&bucket_id)
        .ok_or("Unknown bucket")?;
    
    let mut best_tip: Option<Cid> = None;
    let mut best_length = 0;
    
    // Evaluate all known tips using the cached chain lengths
    for tip_cid in &bucket.known_tips {
        let length = bucket.chain_lengths.get(tip_cid).copied().unwrap_or(0);
        
        // Longest chain wins
        // On tie: lexicographically greater CID wins (deterministic)
        let is_better = match best_tip {
            None => true,
            Some(best) => length > best_length ||
                (length == best_length && *tip_cid > best),
        };
        
        if is_better {
            best_tip = Some(*tip_cid);
            best_length = length;
        }
    }
    
    let new_head = best_tip.ok_or("No valid tips")?;
    
    // Check if canonical head changed (fork resolution)
    if bucket.canonical_head != new_head {
        log::info!(
            "Canonical head changed: {} -> {} (length {})",
            bucket.canonical_head,
            new_head,
            best_length
        );
        
        bucket.canonical_head = new_head;
        bucket.canonical_length = best_length;
    }
    
    // Prune stale tips beyond finality depth
    self.prune_stale_tips(bucket_id);
    
    Ok(())
}

fn compute_chain_length(
    &self,
    head: Cid,
    bucket: &BucketState,
) -> Result<usize> {
    let mut length = 0;
    let mut current = Some(head);
    
    while let Some(cid) = current {
        if cid == bucket.genesis {
            return Ok(length);
        }
        
        length += 1;
        let manifest = bucket.manifests.get(&cid)
            .ok_or("Manifest not found")?;
        current = manifest.previous.as_ref().map(|l| l.cid);
    }
    
    Ok(length) // Reached end of known chain
}

fn prune_stale_tips(&mut self, bucket_id: Uuid) {
    let bucket = self.buckets.get_mut(&bucket_id).unwrap();
    let canonical_length = bucket.canonical_length;
    
    bucket.known_tips.retain(|tip_cid| {
        let tip_length = bucket.chain_lengths.get(tip_cid)
            .copied()
            .unwrap_or(0);
        
        // Keep if within finality window
        let keep = canonical_length <= tip_length + FINALITY_DEPTH;
        
        if !keep {
            log::info!(
                "Pruning stale tip {} (length {}, {} blocks behind canonical)",
                tip_cid,
                tip_length,
                canonical_length - tip_length
            );
        }
        
        keep
    });
}
```

## Finality and Chain Pruning

### Finality Depth

The protocol uses a **finality depth** parameter to achieve eventual convergence to a single chain tip.

```rust
const FINALITY_DEPTH: usize = 10;  // Recommended: 10-20 blocks
```

**Finality rule**: Once the canonical chain is more than `FINALITY_DEPTH` blocks ahead of an orphaned chain, that orphaned chain is permanently abandoned. The protocol will:
1. Reject new manifests building on the stale chain
2. Prune the stale tip from `known_tips`

### Why Finality Matters

Without finality, orphaned chains could be revived indefinitely:

```
Bad: No finality
  Canonical at M100
  Someone builds on orphaned M5 → creates M6
  Chain M6 could theoretically grow to M101 and overtake canonical
  
  Result: Canonical head thrashes forever
```

With finality:

```
Good: With finality (depth 10)
  Canonical at M100
  Someone tries to build on orphaned M5
  
  Check: 100 > 5 + 10? YES
  Reject: "Parent chain is 95 blocks behind, beyond finality depth"
  
  Result: Orphaned chains die after falling too far behind
```
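
Both examples reduce to one comparison. A minimal sketch of the predicate applied throughout this section:

```rust
/// True if a parent at `parent_length` is too far behind the canonical chain
/// to build on, per the finality rule above.
fn beyond_finality(canonical_length: usize, parent_length: usize) -> bool {
    canonical_length > parent_length + FINALITY_DEPTH
}

// From the examples above (FINALITY_DEPTH = 10):
// beyond_finality(100, 5)  == true   (95 blocks behind - reject)
// beyond_finality(100, 95) == false  (within the finality window - accept)
```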

### Timeline: Fork to Single Head

```
t=0: Normal operation
  Genesis → M1 → M2 → M3 → M4 → M5
  known_tips = {M5}
  Single tip ✓

t=6: Fork occurs (concurrent updates)
  M5 → M6a (Peer A)
     ↘ M6b (Peer B)
  known_tips = {M6a, M6b}
  canonical = M6b (wins tiebreak)

t=7-16: Canonical chain grows
  M6b → M7 → M8 → ... → M16
  known_tips = {M6a, M16}
  M6a is orphaned (length 6)
  canonical = M16 (length 16)

t=17: Finality threshold reached
  M16 → M17
  canonical_length = 17
  M6a length = 6
  17 > 6 + 10? YES
  
  Action: Prune M6a
  known_tips = {M17}
  
  Back to single tip! ✓

```

### Handling Finality Rejections

When a manifest is rejected for finality:

```rust
impl PlumtreeHandlers for BucketSync {
    async fn on_ihave(&mut self, msg: Ihave, from: PeerId) {
        match self.on_manifest_announcement(msg.bucket_id, msg.manifest_cid, from).await {
            Ok(_) => {
                // Accepted - forward to eager peers
                self.plumtree.forward_ihave(msg, from).await;
            }
            Err(e) if e.to_string().contains("finality") => {
                // Rejected for finality - send prune, don't forward
                self.send_prune(from, msg).await;
                
                log::warn!(
                    "Rejected stale manifest {} from peer {}: {}",
                    msg.manifest_cid,
                    from,
                    e
                );
            }
            Err(e) if e.to_string().contains("Missing parent") => {
                // Missing parent - request it via graft
                self.send_graft(from, msg).await;
            }
            Err(e) => {
                // Other errors
                log::error!("Error processing manifest: {}", e);
            }
        }
    }
    
    // ... other handlers
}
```

### Choosing Finality Depth

The finality depth parameter trades off different concerns:

| Finality Depth | Partition Tolerance | Pruning Speed | Memory Usage | Risk |
|----------------|--------------------|--------------|--------------| -----|
| 5 blocks | Low | Fast | Low | May reject valid recovery |
| 10 blocks | Medium | Medium | Medium | **Recommended balance** |
| 20 blocks | High | Slow | Medium | Longer fork windows |
| 50 blocks | Very High | Very Slow | High | Orphaned tips linger |
| 100 blocks | Extreme | Extremely Slow | Very High | Unnecessary overhead |


**Factors to consider:**
- **Network latency**: Higher latency networks need deeper finality to tolerate message delays
- **Update frequency**: More frequent updates need shallower finality to prune faster
- **Partition risk**: Networks with common partitions need deeper finality
- **Memory constraints**: Constrained devices benefit from shallower finality
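
Because the right value depends on these factors, the depth is better treated as deployment configuration than a hard-coded constant. A minimal sketch (the config type and field names are illustrative, not part of the protocol):

```rust
/// Illustrative sync configuration.
struct SyncConfig {
    /// How far ahead the canonical chain must be before a competing tip is
    /// pruned and manifests building on it are rejected.
    finality_depth: usize,
}

impl Default for SyncConfig {
    fn default() -> Self {
        // The recommended balance from the table above
        Self { finality_depth: 10 }
    }
}
```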

### Network Partition Scenario

Finality depth helps handle network partitions gracefully:

```
Before partition:
  Genesis → M1 → M2 → M3
  All peers at M3

Network splits at t=4:

Partition A (10 peers):
  M3 → M4a → M5a → ... → M20a (grows to length 20)

Partition B (5 peers):
  M3 → M4b → M5b → ... → M15b (grows to length 15)

Partition heals at t=21:

With FINALITY_DEPTH = 10:
  
  Partition A peers (canonical = M20a, length 20):
    Receive M15b from Partition B
    Check: 20 > 15 + 10? NO (20 ≤ 25)
    ACCEPT M15b ✓
    known_tips = {M20a, M15b}
    canonical = M20a (longer)
  
  Partition B peers (canonical = M15b, length 15):
    Receive M20a from Partition A
    Check: 15 > 20 + 10? NO (obviously)
    ACCEPT M20a ✓
    known_tips = {M15b, M20a}
    canonical = M20a (longer)
  
  All peers converge to M20a ✓
  M15b is orphaned

After 10 more blocks:
  Canonical at M30a (length 30)
  30 > 15 + 10? YES
  M15b pruned from all peers
  known_tips = {M30a}
  
  Single tip restored ✓
```

If finality depth was too shallow (e.g., 5), Partition B's chain would be rejected immediately upon healing, potentially losing work.

### Properties Guaranteed by Finality

1. **Eventual Single Tip**: After finality depth blocks, orphaned branches are pruned
2. **Fork Lifespan Bounded**: Forks can only persist for finality depth blocks
3. **Deterministic Pruning**: All peers prune the same tips at the same relative time
4. **Memory Bounded**: `known_tips` size is bounded by number of concurrent updates within finality window

## Fork Detection and Resolution

### Fork Detection

A fork exists when `known_tips.len() > 1`:

```rust
fn is_forked(&self, bucket_id: Uuid) -> bool {
    self.buckets
        .get(&bucket_id)
        .map(|b| b.known_tips.len() > 1)
        .unwrap_or(false)
}
```

### Automatic Fork Resolution

Forks resolve automatically through the canonical head selection algorithm:

**Scenario 1: Concurrent updates at same height**
```
Genesis → M1 → M2 → M3a (timestamp: 100)
                  ↘ M3b (timestamp: 101)
```
- Both have length 3
- Deterministic tiebreak: `max(CID(M3a), CID(M3b))`
- Winner becomes canonical immediately

**Scenario 2: One chain extends**
```
Genesis → M1 → M2 → M3a → M4a (length 4)
                  ↘ M3b       (length 3)
```
- M4a has length 4 > 3
- M4a becomes canonical
- M3b remains in `known_tips` but is not canonical

**Scenario 3: Both chains extend**
```
Genesis → M1 → M2 → M3a → M4a (length 4)
                  ↘ M3b → M4b (length 4)
```
- Both have length 4
- Deterministic tiebreak: `max(CID(M4a), CID(M4b))`
- Loser remains tracked but not canonical
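
All three scenarios apply the same comparison; it can be written as a single pure function (equivalent to the logic inside `select_canonical_head`):

```rust
use std::cmp::Ordering;

/// Pick between two tips: the longer chain wins, and on equal length the
/// lexicographically greater CID wins. Every peer evaluates this identically,
/// so the winner is the same everywhere.
fn better_tip(a: (Cid, usize), b: (Cid, usize)) -> Cid {
    match a.1.cmp(&b.1) {
        Ordering::Greater => a.0,
        Ordering::Less => b.0,
        Ordering::Equal => if a.0 > b.0 { a.0 } else { b.0 },
    }
}
```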

## Tracking Multiple Chains

**Yes, we track all known chain tips**, but only **ONE** is canonical at any time.

### What We Track

```rust
struct BucketState {
    // The ONE canonical chain head
    canonical_head: Cid,
    canonical_length: usize,
    
    // ALL known chain tips (may be multiple during forks)
    known_tips: HashSet<Cid>,
    
    // Full DAG of ALL manifests (including orphaned branches)
    manifests: HashMap<Cid, Manifest>,
}
```

### Why Track Multiple Chains?

1. **Fork detection**: Need to know when multiple tips exist
2. **Future resolution**: A currently-losing chain might grow longer
3. **Auditability**: Orphaned branches show historical conflicts
4. **Race conditions**: Brief windows where multiple announcements arrive

### Example Timeline

```
t=0:  Genesis → M1 → M2 (canonical)
      known_tips = {M2}
      
t=1:  Peer A announces M3a
      Genesis → M1 → M2 → M3a (canonical)
      known_tips = {M3a}
      
t=2:  Peer B announces M3b (concurrent with M3a)
      Genesis → M1 → M2 → M3a (canonical, wins tiebreak)
                      ↘ M3b (orphaned)
      known_tips = {M3a, M3b}  ← Multiple chains tracked!
      
t=3:  Peer B announces M4b
      Genesis → M1 → M2 → M3a (orphaned)
                      ↘ M3b → M4b (canonical, longer)
      known_tips = {M3a, M4b}  ← Still tracking both!
      canonical switches from M3a to M4b
      
t=4:  Peer A announces M4a
      Genesis → M1 → M2 → M3a → M4a (orphaned)
                      ↘ M3b → M4b (canonical, wins tiebreak)
      known_tips = {M4a, M4b}
      Both length 4, M4b wins tiebreak
      
t=5-14: Everyone builds on M4b (canonical)
      M4b → M5 → M6 → ... → M14
      known_tips = {M4a, M14}
      M4a orphaned at length 4
      canonical = M14 at length 14
      
t=15: Finality threshold reached
      M14 → M15
      canonical_length = 15
      M4a length = 4
      15 > 4 + FINALITY_DEPTH(10)? YES
      
      Prune M4a from known_tips
      known_tips = {M15}
      
      Back to single tip! ✓
      
Future: If someone tries to build on M4a
      New manifest M5a with parent = M4a
      Check: 15 > 4 + 10? YES
      REJECTED - parent beyond finality depth
```

### Pruning Old Tips (Optional Optimization)

You MAY periodically prune `known_tips` to remove tips that are too far behind:

```rust
fn prune_stale_tips(&mut self, bucket_id: Uuid) {
    const MAX_FORK_DEPTH: usize = 10;
    
    let bucket = self.buckets.get_mut(&bucket_id).unwrap();
    let canonical_length = bucket.canonical_length;
    
    // Use the cached chain lengths; keep only tips within the fork window
    bucket.known_tips.retain(|tip| {
        let length = bucket.chain_lengths.get(tip).copied().unwrap_or(0);
        canonical_length <= length + MAX_FORK_DEPTH
    });
}
```

This keeps `known_tips` small while still detecting recent forks.

## Properties and Guarantees

### Safety Properties

1. **Chain Integrity**: Verification extends transitively - each manifest's parent was verified when it was added
2. **Deterministic Canonical**: All peers select same canonical head (eventually)
3. **Fork Detection**: Concurrent updates are always detected
4. **No Data Loss**: All manifests stored, even orphaned ones
5. **Bounded Fork Lifespan**: Forks can only persist for finality depth blocks
6. **Eventual Single Tip**: After finality depth, system returns to single canonical tip

### Liveness Properties

1. **Eventually Consistent**: All peers converge to same canonical head
2. **Progress**: Longest chain always grows
3. **Fork Resolution**: Forks resolve within one gossip round after any chain extends

### Failure Modes

1. **Network Partition**: Each partition has own canonical head; rejoins when partition heals
2. **Peer Offline**: Other peers continue; offline peer catches up on reconnect
3. **Concurrent Writes**: Both chains tracked; longest wins immediately
4. **Missing History**: Automatically fetch missing parent manifests

## Fork Resolution Timing

### Should We Wait Before Selecting Canonical Head?

**Short answer: No grace period needed. Resolve forks immediately.**

The protocol should select the canonical head **immediately** whenever the DAG changes, for these reasons:

#### 1. Determinism Requires Immediate Resolution

If we wait, we introduce non-determinism:

```
Bad: Grace period approach
    t=0: Fork detected {M3a, M3b}
    t=1: Wait 5 seconds for more manifests...
    t=6: Select canonical head
    
    Problem: Different peers might see different manifests during grace period!
    
    Peer A sees at t=6: {M3a, M3b}
        → Selects M3b (wins tiebreak)
    
    Peer B sees at t=6: {M3a, M3b, M4a}  (M4a arrived at t=5)
        → Selects M4a (longer)
    
    Result: Peers disagree even with same algorithm!
```

With immediate resolution, peers always compute based on their current knowledge, which eventually converges.

#### 2. Reads Need Consistency

Users reading from the bucket need a consistent view:

```
Without immediate resolution:
    t=0: Fork {M3a, M3b}, wait...
    t=1: User reads key "foo"
         → Which chain do we read from? Both? Undefined!
    
With immediate resolution:
    t=0: Fork {M3a, M3b}, canonical = M3b (tiebreak)
    t=1: User reads key "foo"
         → Read from M3b ✓
    t=2: M4a arrives, canonical switches to M4a
    t=3: User reads key "foo"
         → Read from M4a ✓
```

The canonical head might change, but it's always well-defined.
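
A read therefore always resolves through whatever the canonical head is at that moment. A rough sketch (assuming a `fetch_blob`/`decrypt` pair mirroring the `store_blob`/`encrypt` calls used in `publish_manifest`, and that decrypted bucket data behaves like a key-value map):

```rust
async fn read_key(&self, bucket_id: Uuid, key: &str) -> Result<Option<Vec<u8>>> {
    let bucket = self.buckets.get(&bucket_id)
        .ok_or("Unknown bucket")?;
    
    // Always read through the canonical head - never from an arbitrary tip
    let head = bucket.manifests.get(&bucket.canonical_head)
        .ok_or("Canonical head manifest missing")?;
    
    // Fetch and decrypt the bucket data the canonical manifest points to
    let encrypted = self.fetch_blob(head.entry.cid).await?;
    let data = self.decrypt(encrypted, &head.share)?;
    
    Ok(data.get(key).cloned())
}
```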

#### 3. Grace Period Doesn't Help Convergence

Waiting doesn't improve eventual consistency:

```
Scenario: Two peers with concurrent updates

Immediate resolution:
    t=0: Peer A announces M3a
    t=1: Peer B announces M3b
    t=2: Both peers have {M3a, M3b}, select M3b
    Result: Converged at t=2 ✓

With 10s grace period:
    t=0: Peer A announces M3a
    t=1: Peer B announces M3b
    t=11: Peer A grace period expires, selects from {M3a, M3b}
    t=12: Peer B grace period expires, selects from {M3a, M3b}
    Result: Converged at t=12 ✗ (worse!)
```

Grace periods only delay convergence without improving outcomes.

#### 4. Longest Chain Grows Naturally

The beauty of longest-chain is that forks **self-resolve** as chains grow:

```
t=0: Fork {M3a, M3b}, canonical = M3b (tiebreak)
t=1: Someone builds on M3a → M4a
     canonical switches to M4a (longer)
t=2: Someone builds on M4a → M5a
     canonical stays M4a→M5a (longer)

Fork resolves organically through chain growth, no waiting needed.
```

### Recommended Implementation: Immediate Resolution

```rust
async fn add_manifest(&mut self, cid: Cid, manifest: Manifest) -> Result<()> {
    let bucket_id = manifest.id;
    let bucket = self.buckets.get_mut(&bucket_id)?;
    
    // 1. Add to DAG
    bucket.manifests.insert(cid, manifest.clone());
    
    // 2. Update tips
    if let Some(prev) = manifest.previous {
        bucket.known_tips.remove(&prev.cid);
    }
    bucket.known_tips.insert(cid);
    
    // 3. IMMEDIATELY recompute canonical head
    //    No grace period, no waiting
    self.select_canonical_head(bucket_id).await?;
    
    Ok(())
}
```

Every time a manifest is added, the canonical head is recomputed immediately.

### What About Rapid Chain Switching?

You might worry about canonical head "thrashing" during a fork:

```
t=0: M3a arrives → canonical = M3a
t=1: M3b arrives → canonical = M3b (wins tiebreak)
t=2: M4a arrives → canonical = M4a (longer)
t=3: M4b arrives → canonical = M4b (wins tiebreak)
```

**This is actually fine and expected behavior:**

1. **Each switch is deterministic** - all peers switch at the same relative time
2. **Reads are consistent** - each read uses the current canonical head
3. **Writes build on canonical** - naturally extends the winning chain
4. **Fork resolves quickly** - one chain will pull ahead

### Should Writes Wait During Forks?

Another question: should peers refuse to write while a fork exists?

**Answer: No, write immediately on current canonical head.**

```rust
async fn write_to_bucket(&mut self, bucket_id: Uuid, data: Data) -> Result<Cid> {
    let bucket = self.buckets.get(&bucket_id)?;
    
    // Check for fork
    if bucket.known_tips.len() > 1 {
        // Option A: Wait for fork to resolve
        //   ❌ Could wait forever!
        //   ❌ Degrades to synchronous operation
        
        // Option B: Write on canonical head anyway
        //   ✓ Always makes progress
        //   ✓ Might end up on losing chain (acceptable)
    }
    
    // Build on current canonical head
    let parent = bucket.canonical_head;
    let manifest = self.create_manifest(bucket_id, parent, data);
    
    self.publish_manifest(manifest).await
}
```

**Writing during a fork is safe:**
- Your write extends one of the competing chains
- If your chain wins (longest), your write is canonical
- If your chain loses, your write is orphaned (but preserved in DAG)
- You can always re-apply orphaned writes if needed

### Detecting Fork Duration

For observability, you might want to track how long forks last:

```rust
struct BucketState {
    // ... existing fields
    
    fork_detected_at: Option<Instant>,
}

async fn select_canonical_head(&mut self, bucket_id: Uuid) -> Result<()> {
    let bucket = self.buckets.get_mut(&bucket_id)?;
    
    let is_forked = bucket.known_tips.len() > 1;
    
    // Track fork duration
    match (is_forked, bucket.fork_detected_at) {
        (true, None) => {
            // Fork just started
            bucket.fork_detected_at = Some(Instant::now());
            log::warn!("Fork detected on bucket {}", bucket_id);
        }
        (false, Some(started)) => {
            // Fork resolved
            let duration = started.elapsed();
            bucket.fork_detected_at = None;
            log::info!("Fork resolved after {:?}", duration);
        }
        _ => {}
    }
    
    // ... rest of selection logic
}
```

This helps monitor network health:
- Short forks (< 1s): Normal, expected during concurrent updates
- Long forks (> 10s): Might indicate network partition or issue

### Exception: User Confirmation for Writes

In high-stakes scenarios, you might want users to confirm writes during forks:

```rust
async fn write_to_bucket_with_confirmation(
    &mut self,
    bucket_id: Uuid,
    data: Data,
) -> Result<Cid> {
    let bucket = self.buckets.get(&bucket_id)?;
    
    // Check for fork
    if bucket.known_tips.len() > 1 {
        // Warn user
        let confirmed = self.prompt_user(
            "Fork detected. Your write might be on a losing chain. Continue?"
        ).await?;
        
        if !confirmed {
            return Err("Write cancelled by user");
        }
    }
    
    // Proceed with write
    self.write_to_bucket(bucket_id, data).await
}
```

But this is an application-level policy, not protocol-level.

## Summary: Fork Resolution Timing

### Recommendation: Immediate Resolution

**Always select canonical head immediately when DAG changes.**

**Reasons:**
1. ✅ Deterministic - all peers compute from same state
2. ✅ Consistent reads - always well-defined canonical head
3. ✅ Fast convergence - no artificial delays
4. ✅ Simple - no grace period state machine
5. ✅ Natural resolution - chains grow and win organically

**Grace periods:**
- ❌ Don't improve convergence
- ❌ Make reads ambiguous
- ❌ Introduce non-determinism
- ❌ Delay consensus

**Optional optimizations:**
- ✓ Batch recomputation (100ms) for CPU efficiency
- ✓ Track fork duration for monitoring
- ✓ User confirmation for writes (application policy)
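
As an example of the first optimization, batching can be as simple as marking buckets dirty and recomputing on a short timer instead of inside every `add_manifest` call. A sketch, assuming a tokio runtime and a `dirty_buckets: HashSet<Uuid>` field that is not part of the state described above:

```rust
use std::time::Duration;

impl BucketSync {
    /// Called from add_manifest instead of recomputing immediately.
    fn mark_dirty(&mut self, bucket_id: Uuid) {
        self.dirty_buckets.insert(bucket_id);
    }
    
    /// Background task: recompute canonical heads at most once per 100ms.
    async fn recompute_loop(&mut self) {
        let mut tick = tokio::time::interval(Duration::from_millis(100));
        loop {
            tick.tick().await;
            let dirty: Vec<Uuid> = self.dirty_buckets.drain().collect();
            for bucket_id in dirty {
                let _ = self.select_canonical_head(bucket_id).await;
            }
        }
    }
}
```

This trades at most one tick of staleness for fewer recomputations; the determinism argument above still holds because every peer eventually processes the same manifests.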

**The key insight:** Longest-chain consensus doesn't need coordination or waiting. Each peer independently computes the canonical head from its current knowledge, and all peers eventually converge as information propagates through gossip.

## Peer Reliability and Catch-Up

### Peer Goes Offline and Returns

When a peer goes offline and comes back after multiple updates, the protocol handles catch-up automatically.

#### Scenario: Peer Misses Multiple Updates

```
Initial state (peer online):
    Genesis → M1 → M2 (canonical)
    Peer has: {Genesis, M1, M2}
    Peer canonical_head: M2

Peer goes offline...

While offline, network progresses:
    Genesis → M1 → M2 → M3 → M4 → M5
    Network canonical_head: M5

Peer comes back online:
    Peer still thinks: canonical_head = M2
    Peer knows nothing about: {M3, M4, M5}
```

### Catch-Up Process

The peer doesn't need to manually "resolve" anything - the protocol handles it automatically:

```rust
async fn peer_comes_online(&mut self, bucket_id: Uuid) {
    // 1. Reconnect to network
    self.plumtree.rejoin().await;
    
    // 2. Receive gossip about new heads
    // Other peers will send announcements for their current heads
    
    // 3. Process announcements
    // This triggers automatic history fetch
}

// When peer receives announcement for M5
async fn on_manifest_announcement(
    &mut self,
    manifest_cid: Cid, // M5
    manifest: Manifest,
) -> Result<()> {
    // Check: do we have the parent (M4)?
    if let Some(prev) = manifest.previous {
        if !self.has_manifest(prev.cid) {
            // Missing parent - fetch history
            self.fetch_missing_history(manifest_cid, manifest).await?;
            return Ok(());
        }
    }
    
    // We have parent - add normally
    self.add_manifest(manifest_cid, manifest).await
}

async fn fetch_missing_history(
    &mut self,
    head: Cid, // M5
    head_manifest: Manifest,
) -> Result<()> {
    let mut to_fetch = vec![];
    let mut current_manifest = head_manifest;
    let mut current_cid = head;
    
    // Walk backwards until we find something we have
    loop {
        // Remember the parent link before moving the manifest into to_fetch
        let previous = current_manifest.previous.clone();
        
        // This manifest is not in our DAG yet - record it
        to_fetch.push((current_cid, current_manifest));
        
        match previous {
            // Parent already in our DAG - found the connection point (M2)
            Some(prev_link) if self.has_manifest(prev_link.cid) => break,
            
            // Parent unknown - fetch it and keep walking backwards
            Some(prev_link) => {
                current_cid = prev_link.cid;
                current_manifest = self.fetch_manifest(current_cid).await?;
            }
            
            // Reached genesis
            None => break,
        }
    }
    
    // Now add all manifests in order (oldest first)
    to_fetch.reverse();
    for (cid, manifest) in to_fetch {
        self.add_manifest(cid, manifest).await?;
    }
    
    Ok(())
}
```

**Result after catch-up:**
```
Peer now has: {Genesis, M1, M2, M3, M4, M5}
Peer canonical_head: M5 (longest chain automatically selected)
```

### Do They Need to Resolve Forks?

**No manual fork resolution needed.** The longest-chain selection happens automatically:

```rust
async fn add_manifest(&mut self, cid: Cid, manifest: Manifest) -> Result<()> {
    let bucket_id = manifest.id;
    let bucket = self.buckets.get_mut(&bucket_id)?;
    
    // 1. Add to DAG
    bucket.manifests.insert(cid, manifest.clone());
    
    // 2. Update tips
    if let Some(prev) = manifest.previous {
        bucket.known_tips.remove(&prev.cid);
    }
    bucket.known_tips.insert(cid);
    
    // 3. Automatically recompute canonical head
    self.select_canonical_head(bucket_id).await?;
    
    Ok(())
}
```

After each manifest is added, `select_canonical_head` runs and picks the longest chain. The peer doesn't need to do anything special.

### Peer Returns During a Fork

More complex scenario: peer comes back while network has multiple competing chains:

```
Peer goes offline at M2...

While offline, concurrent updates create fork:
    Genesis → M1 → M2 → M3a → M4a (length 4)
                      ↘ M3b → M4b (length 4)
    
Network state:
    known_tips = {M4a, M4b}
    canonical_head = M4b (won tiebreak)

Peer comes back online:
    Peer still has: canonical_head = M2
```

**Catch-up process:**

```
Step 1: Peer connects to Peer A
    Peer A announces: M4a
    
    Peer receives M4a announcement
    → Missing parent M3a
    → Fetch M3a
    → Missing parent M2
    → Already have M2! ✓
    
    Peer adds: M3a, then M4a
    Peer state:
        known_tips = {M4a}
        canonical_head = M4a (length 4)

Step 2: Peer connects to Peer B
    Peer B announces: M4b
    
    Peer receives M4b announcement
    → Missing parent M3b
    → Fetch M3b
    → Missing parent M2
    → Already have M2! ✓
    
    Peer adds: M3b, then M4b
    Peer state:
        known_tips = {M4a, M4b}
        canonical_head = M4b (length 4, wins tiebreak)

Final state:
    Peer now agrees with network consensus
```

**Key point**: The peer doesn't need to "know" there was a fork. It just:
1. Fetches manifests it's missing
2. Adds them to its DAG
3. Lets `select_canonical_head` pick the winner

The algorithm handles fork resolution automatically.

### Optimization: Announce Current Head on Connect

When a peer reconnects, it can speed up catch-up by explicitly asking for the current head:

```rust
async fn rejoin_network(&mut self, bucket_id: Uuid) -> Result<()> {
    // 1. Reconnect to Plumtree
    self.plumtree.rejoin().await;
    
    // 2. Query peers for their current canonical head
    let connected_peers = self.plumtree.get_peers();
    
    for peer in connected_peers {
        // Ask: "What's your canonical head for this bucket?"
        let their_head = self.request_canonical_head(peer, bucket_id).await?;
        
        // Process it (will trigger history fetch if needed)
        self.on_manifest_announcement(bucket_id, their_head, peer).await?;
    }
    
    Ok(())
}
```

This is more efficient than waiting for natural gossip.
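
The `request_canonical_head` call implies a small request/response on the peer being asked. A sketch of that responder (the response type is illustrative, not defined elsewhere in this document):

```rust
/// Illustrative response to a canonical-head query.
struct CanonicalHeadResponse {
    bucket_id: Uuid,
    canonical_head: Cid,
    canonical_length: usize,
}

async fn on_canonical_head_request(&self, bucket_id: Uuid) -> Result<CanonicalHeadResponse> {
    let bucket = self.buckets.get(&bucket_id)
        .ok_or("Unknown bucket")?;
    
    // Report our current view; the requester treats it like any other
    // announcement and fetches missing history as needed.
    Ok(CanonicalHeadResponse {
        bucket_id,
        canonical_head: bucket.canonical_head,
        canonical_length: bucket.canonical_length,
    })
}
```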

### Peer with Stale Fork

A peer might have been working on a chain that ended up losing:

```
Network progresses to M5:
    Genesis → M1 → M2 → M3 → M4 → M5 (canonical)

Peer goes offline at M2, creates local updates:
    Genesis → M1 → M2 → M3' → M4' (peer's local chain)
    
Peer comes back online:
    Peer has: M4' (length 4)
    Network has: M5 (length 5)
```

**What happens:**

```
Step 1: Peer receives announcement for M5
    Fetches M5, M4, M3 (missing history)
    Adds them to DAG
    
    Peer state:
        known_tips = {M4', M5}
        manifests = {Genesis, M1, M2, M3, M4, M5, M3', M4'}

Step 2: select_canonical_head runs
    Compute length(M4') = 4
    Compute length(M5) = 5
    
    M5 wins (longer chain)
    
    Peer state:
        canonical_head = M5
        known_tips = {M4', M5}

Result:
    Peer's local work (M3', M4') is orphaned
    Peer now reads/writes from M5
    M3' and M4' remain in DAG for auditability
```

**The peer's local work is not lost** - it's still in the DAG. But it's not canonical.

### Recovering Orphaned Work

If a peer wants to preserve work from an orphaned chain:

```rust
async fn recover_orphaned_changes(
    &self,
    orphaned_head: Cid,
    canonical_head: Cid,
) -> Result<Vec<(String, Vec<u8>)>> {
    // 1. Read data from orphaned chain
    let orphaned_data = self.read_all_from_chain(orphaned_head).await?;
    
    // 2. Read data from canonical chain
    let canonical_data = self.read_all_from_chain(canonical_head).await?;
    
    // 3. Find differences
    let mut conflicts = vec![];
    for (key, orphaned_value) in orphaned_data {
        if let Some(canonical_value) = canonical_data.get(&key) {
            if orphaned_value != *canonical_value {
                conflicts.push((key, orphaned_value));
            }
        }
    }
    
    Ok(conflicts)
}

// User can then manually re-apply these changes
async fn reapply_changes(&mut self, changes: Vec<(String, Vec<u8>)>) {
    for (key, value) in changes {
        self.update_bucket(key, value).await;
    }
}
```

This creates new manifests on top of the canonical chain with the orphaned data.

### Network Partition Heal

Special case: network splits into two partitions, each progresses independently:

```
Before partition:
    Genesis → M1 → M2

Partition A:
    M2 → M3a → M4a → M5a (length 5)

Partition B:
    M2 → M3b → M4b (length 4)

Partition heals:
    All peers in both partitions now gossip with each other
```

**Healing process:**

```
Peers in Partition B:
    Receive announcement for M5a
    Fetch M5a, M4a, M3a (missing history)
    
    New state:
        known_tips = {M4b, M5a}
        Compute: length(M5a) = 5 > length(M4b) = 4
        canonical_head = M5a

Peers in Partition A:
    Receive announcement for M4b
    Fetch M4b, M3b (missing history)
    
    New state:
        known_tips = {M5a, M4b}
        Compute: length(M5a) = 5 > length(M4b) = 4
        canonical_head = M5a

Result:
    ALL peers converge to M5a automatically
    No manual intervention needed
```

### Reliability Guarantees

**The protocol provides:**

1. **Automatic Catch-Up**: Peers fetch missing history on reconnect
2. **No Manual Resolution**: Longest-chain rule handles everything
3. **Work Preservation**: Orphaned chains stay in DAG
4. **Deterministic Convergence**: All peers reach same canonical head
5. **Partition Tolerance**: Network splits heal automatically

**Peer responsibilities:**

1. ✅ Fetch missing manifests when announced
2. ✅ Add manifests to DAG
3. ✅ Run `select_canonical_head` after each addition

**Peer does NOT need to:**

- ❌ Detect forks manually
- ❌ Choose which chain to follow
- ❌ Coordinate with other peers
- ❌ Implement consensus algorithm

The protocol handles all fork resolution automatically through longest-chain selection.

## Hard Forks and Permanent Divergence

### What is a Hard Fork?

A **hard fork** occurs when peers **permanently disagree** on the canonical chain, even after full information propagation. This is fundamentally different from a temporary fork.

### Can the Network Hard Fork?

**Short answer: No, not under normal operation. The protocol is designed to prevent hard forks.**

The longest-chain rule with deterministic tiebreaking means:
- Given the same set of manifests
- All peers will select the same canonical head
- Disagreement only occurs during gossip propagation (temporary)

### Conditions That Could Cause a Hard Fork

A hard fork would require one of these scenarios:

#### 1. Network Partition (Soft Fork - Self-Healing)

```
Initial state:
    Genesis → M1 → M2 (all peers agree)

Network partition occurs:
    Partition A: M2 → M3a → M4a
    Partition B: M2 → M3b → M4b

During partition:
    - Peers in A see canonical = M4a
    - Peers in B see canonical = M4b
    - This is a TEMPORARY fork

Partition heals:
    All peers now see both chains:
    M2 → M3a → M4a (length 4)
       ↘ M3b → M4b (length 4)
    
    Deterministic tiebreak: max(CID(M4a), CID(M4b))
    ALL peers converge to same winner
```

**Result: NOT a hard fork** - self-heals via deterministic tiebreak

#### 2. Byzantine/Malicious Peers (Protocol Violation)

A malicious peer could:
- Create invalid manifests (wrong bucket ID, malformed structure)
- Claim false genesis
- Create manifests with fake `previous` pointers

```
Honest chain:  Genesis → M1 → M2
Malicious:     FakeGenesis → M1' → M2'
```

**Prevention**: 
- One-hop verification rejects manifests with unknown parents
- Malicious peer can't inject fake history into honest peers' DAGs
- At worst, malicious peer talks to itself

**Result: NOT a hard fork** - honest peers reject invalid manifests

#### 3. Different Genesis (ACTUAL Hard Fork)

If peers initialize with **different genesis manifests**:

```
Peer Group A: GenesisA → M1a → M2a
Peer Group B: GenesisB → M1b → M2b
```

These are **incompatible chains** - they can never merge.

**This requires:**
- Different bucket initialization
- Manual misconfiguration
- Intentional fork by bucket creator

**Result: PERMANENT hard fork** - two incompatible buckets exist

#### 4. Protocol Version Incompatibility

If peers run different protocol versions with incompatible rules:

```
Old peers: Use timestamp for tiebreak
New peers: Use CID for tiebreak
```

With same-length competing chains, old and new peers diverge permanently.

**Prevention**: 
- Protocol version in manifest
- Reject manifests from incompatible versions
- Coordinated upgrades

**Result: Could cause hard fork** - requires governance
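
A minimal version gate during verification could look like the sketch below (the numeric `protocol` field on `Version` and the supported range are assumptions; the document does not specify the versioning scheme):

```rust
/// Illustrative: protocol revisions this peer knows how to validate.
const MIN_SUPPORTED_VERSION: u32 = 1;
const MAX_SUPPORTED_VERSION: u32 = 2;

fn check_protocol_version(manifest: &Manifest) -> Result<()> {
    // Hypothetical numeric revision carried in the manifest's Version metadata
    let v = manifest.version.protocol;
    
    if v < MIN_SUPPORTED_VERSION || v > MAX_SUPPORTED_VERSION {
        return Err(format!(
            "Unsupported protocol version {} (supported {}..={})",
            v, MIN_SUPPORTED_VERSION, MAX_SUPPORTED_VERSION
        ));
    }
    Ok(())
}
```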

### Would Peers Have to "Decide" on a Chain?

**No - the algorithm decides deterministically.**

Given the same information, all peers select the same canonical head:
1. Calculate length of each tip
2. Pick longest
3. On tie, pick lexicographically greatest CID

No human decision, no voting, no consensus mechanism needed.

### Example: Can Two Peers Permanently Disagree?

```
Scenario:
    Genesis → M1 → M2 → M3a (length 3, CID = 0xAAA...)
                      ↘ M3b (length 3, CID = 0xBBB...)

Peer Alice knows about both:
    - Computes: 0xBBB... > 0xAAA...
    - Canonical = M3b

Peer Bob knows about both:
    - Computes: 0xBBB... > 0xAAA...
    - Canonical = M3b

Both agree!
```

**The only way they disagree:**
- Alice doesn't know about M3b yet (temporary)
- Bob doesn't know about M3a yet (temporary)

Once gossip completes, they converge.

### Explicit Chain Selection (Manual Fork)

If you wanted peers to manually choose a chain, you'd need to:

**Add a "pin" mechanism:**
```rust
struct BucketState {
    // ... existing fields
    
    // Manual override: pin to specific chain
    pinned_head: Option<Cid>,
}

fn select_canonical_head(&mut self) {
    // Check for manual pin first
    if let Some(pinned) = self.pinned_head {
        if self.known_tips.contains(&pinned) {
            self.canonical_head = pinned;
            return; // Ignore longest-chain rule
        }
    }
    
    // Otherwise use longest-chain rule
    // ...
}
```

**This creates intentional hard fork:**
- Peers with `pinned_head = M3a` ignore M3b
- Peers with `pinned_head = M3b` ignore M3a
- They permanently diverge

### Hard Fork Detection

Peers can detect if they're on different chains:

```rust
struct PeerChainInfo {
    peer_id: PeerId,
    canonical_head: Cid,
    canonical_length: usize,
}

fn detect_hard_fork(&self, other_peer_info: PeerChainInfo) -> bool {
    // Same length but different head = potential hard fork
    if other_peer_info.canonical_length == self.canonical_length &&
       other_peer_info.canonical_head != self.canonical_head {
        
        // Check if we know about their head
        if !self.has_manifest(other_peer_info.canonical_head) {
            return false; // Just missing info, not a fork
        }
        
        // We have their head but still disagree = BUG or manual pin
        return true;
    }
    
    false
}
```

### Fork Recovery Mechanisms

If peers detect permanent disagreement:

#### Option 1: Automatic - Trust Majority
```rust
async fn resolve_via_majority(&mut self, bucket_id: Uuid) {
    // Poll connected peers for their canonical head
    let peer_votes: HashMap<Cid, usize> = self.poll_peers(bucket_id).await;
    
    // Switch to the majority chain by pinning it
    let majority_head = peer_votes.iter()
        .max_by_key(|(_, count)| *count)
        .map(|(cid, _)| *cid);
    
    if let Some(head) = majority_head {
        self.buckets.get_mut(&bucket_id).unwrap().pinned_head = Some(head);
    }
}
```

#### Option 2: Manual - User Intervention
```rust
// User explicitly chooses which chain to follow
async fn force_chain(&mut self, bucket_id: Uuid, head: Cid) {
    self.buckets.get_mut(&bucket_id).unwrap().pinned_head = Some(head);
}
```

#### Option 3: Reject - Require Reconciliation
```rust
// Stop accepting new manifests until fork is resolved
if self.detect_hard_fork(...) {
    return Err("Hard fork detected - manual intervention required");
}
```

## Summary

### Key Design Decisions

1. **One-Hop Verification**: Trust transitivity - only verify the immediate parent exists
2. **Longest Chain Wins**: Simple, deterministic conflict resolution
3. **Track All Tips**: Detect forks but maintain single canonical head
4. **Eventual Consistency**: Temporary disagreement is acceptable
5. **Plumtree Gossip**: Efficient propagation with tree + lazy gossip backup

### What We Track

- **One canonical chain** for reads/writes
- **Multiple tips** during forks (for detection and future resolution)
- **Full DAG** of all manifests (for auditability)

### When Multiple Chains Exist

- During concurrent updates (fork state)
- As orphaned branches (historical record)
- Briefly during gossip propagation

But only **one chain is ever canonical** for reading bucket state.

### Hard Fork Resistance

**The protocol is designed to prevent hard forks:**
- Deterministic canonical head selection
- No manual chain choice required
- Automatic convergence after information propagation

**Hard forks only occur from:**
- Different genesis (incompatible buckets)
- Protocol version mismatch (requires governance)
- Manual pinning (intentional divergence)

**NOT from:**
- Network partitions (self-healing)
- Concurrent updates (deterministic tiebreak)
- Byzantine peers (rejected by verification)