v0.2.0 — 2026-03-09
Design Document
This is the canonical technical reference for ItsGoin. It describes the vision, the architecture, and the current state of every subsystem — with full implementation detail. This document is versioned; each update records what changed.
Changelog
v0.2.0 (2026-03-09): Major design updates — three-layer architecture (Mesh/Social/File), N+10 identification, keep-alive sessions, 3-tier revocation, multi-device identity, growth loop redesign, pull sync from social/file layers, relay pipes default to own-device-only, remove anchor register loop.
v0.1.0 (2026-03-09): First versioned edition. Consolidated from ARCHITECTURE.md, code review, and gap analysis into a single source of truth.
1. The Vision
"A decentralized fetch-cache-re-serve content network that supports public and private sharing without a central server. It replaces 'upload to a platform' with 'publish into a swarm' where attention creates distribution, privacy is client-side encryption, and availability comes from caching, not money."
The honest promise: Cold content survives only if someone intentionally keeps hosting it. The system is a loss-risk network — best-effort availability, not durability guarantees.
Guiding principles
- Our distributed network first, direct connections always preferred
- Social graph and friendly UX in front, infrastructure truth in back
- Privacy by design: public profile is minimal, private profiles are per-circle, social graph visibility is controlled
- Don't break content addressing (PostId = BLAKE3(post), visibility is separate metadata)
- Your feed is yours: reverse-chronological by default, no algorithmic ranking, user-controlled discovery
- Three separate layers — Mesh (structural backbone), Social (follows/audience/DMs), File (content storage/distribution) — each with its own connections and routing
2. Identity & Bootstrap
First startup
- Identity: Load or generate ed25519 keypair from {data_dir}/identity.key. NodeId = 32-byte public key. A unique device identity is also generated for multi-device coordination (see Section 21).
- Storage: Open SQLite database (distsoc.db), auto-migrate schema.
- Blob store: Create {data_dir}/blobs/ with 256 hex-prefix shards (00/ through ff/).
- Bootstrap anchors: Load from {data_dir}/anchors.json. If missing, use hardcoded default anchor.
- Bootstrap: If the peers table is empty, connect to a bootstrap anchor. Request referrals and matchmaking (unless self or the other node is an anchor). The node persists on that anchor's referral list until released at the referral count limit, while the growth loop begins immediately.
Startup cycles
Spawned after bootstrap completes:
| Cycle | Interval | Purpose |
| --- | --- | --- |
| Pull sync | On demand (3h Self Last Encounter threshold) | Pull new posts from social + upstream file peers |
| Routing diff | 120s (2 min) | Broadcast N1/N2 changes to mesh + keep-alive sessions |
| Rebalance | 600s (10 min) | Clean dead connections, reconnect preferred, signal growth |
| Growth loop | 60s + reactive (on N2/N3 receipt) | Fill empty mesh slots until 101 (90% threshold for reactive mode) |
| Recovery loop | Reactive (mesh empty) | Emergency reconnect via anchors |
| Social/File connectivity check | 60s | Verify <N4 access to N+10 of active social + file peers; open keep-alive sessions as needed |
Removed: Anchor register loop. Anchors are for forming initial mesh connections when bootstrapping, not for ongoing registration. Nodes only connect to anchors during bootstrap or recovery.
3. N+10 Identification
Concept
Every node is identified not just by its NodeId but by its N+10: the node's own NodeId plus the NodeIds of its 10 preferred peers. This makes any node far easier to locate: if you can reach any of the 11 nodes in someone's N+10, you can find them.
Where N+10 appears
| Context | What's included |
| --- | --- |
| Self identification | All self-identification messages include the sender's N+10 |
| Following someone | When you follow a peer, you store and maintain their N+10 in your social routes |
| Post headers | Every post header includes the author's current N+10. Updated whenever they post. |
| Blob headers | Blob/file headers include: (1) the author's N+10, (2) the upstream file source's N+10 (if not the author), (3) N+10s of up to 100 downstream file hosts |
| Recent post lists | Author manifests include the author's N+10 alongside their recent post list |
Why this works
Preferred peers are bilateral agreements — stable, long-lived connections. By including them in identification, any node that can find any of your 10 preferred peers can transitively find you within one hop. This eliminates most discovery cascades for socially-connected nodes.
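To make the transitive-lookup idea concrete, here is a minimal sketch of an N+10 identifier and the "findable via any of 11 needles" check. The struct shape and names (`NPlus10`, `needles`, `findable_via`) are illustrative, not the real implementation.

```rust
use std::collections::HashSet;

type NodeId = [u8; 32];

/// Hypothetical shape of an N+10 identifier: a node plus its preferred peers.
struct NPlus10 {
    node_id: NodeId,
    preferred: Vec<NodeId>, // up to 10 on desktop
}

impl NPlus10 {
    /// All identifiers that can be used to locate this node (itself + preferred peers).
    fn needles(&self) -> Vec<NodeId> {
        let mut out = vec![self.node_id];
        out.extend(self.preferred.iter().copied());
        out
    }

    /// The node is transitively findable if ANY needle is in our reachable set.
    fn findable_via(&self, reachable: &HashSet<NodeId>) -> bool {
        self.needles().iter().any(|n| reachable.contains(n))
    }
}
```

The same needle set is reused by worm search (Section 11), which searches for any of the 11 identifiers rather than the target alone.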
Status: Partial
N+10 is partially implemented — preferred peers exist and are tracked, but N+10 is not yet included in all identification contexts (post headers, blob headers, self-identification messages). Currently preferred_tree in social routes provides similar functionality for relay selection.
4. Connections & Growth
Connection types
- Mesh connection — long-lived routing slot. Structural backbone for discovery and propagation. DB table: mesh_peers.
- Keep-alive session — long-lived connection for social or file layer peers that aren't in the mesh 101. Participates in N2/N3 routing. See Section 14.
- Session connection — short-lived, held open for active interaction (DM conversations, group activity, anchor matchmaking).
- Ephemeral connection — single request/response, no slot allocation.
Slot architecture
| Slot kind | Desktop | Mobile | Purpose |
| --- | --- | --- | --- |
| Preferred | 10 | 3 | Bilateral agreements, eviction-protected |
| Non-preferred | 91 | 12 | Growth loop fills these with diverse peers |
| Total mesh | 101 | 15 | Long-lived routing backbone |
| Keep-alive sessions | No hard limit | No hard limit | Social/file layer peers not in mesh (max 50% of session capacity reserved for keep-alive) |
| Sessions (interactive) | No hard limit | No hard limit | Active DM, group interaction, anchor matchmaking |
| Relay pipes | 10 | 2 | Own-device relay by default; opt-in for relaying for others |
v0.2.0 change: Removed the distinction between "local" (71) and "wide" (20) non-preferred slots. The growth loop goes wide by default. Session counts are no longer hard-limited — an average computer can sustain ~1000 QUIC sessions without strain. The 50% keep-alive reservation ensures sessions remain available for interactive use.
MeshConnection struct
Each mesh connection tracks: node_id, connection (QUIC), slot_kind (Preferred or NonPreferred), remote_addr (captured from Incoming before accept), last_activity (AtomicU64), created_at.
Keepalive
- Interval: 30 seconds (MeshKeepalive message, 0xE0)
- Zombie detection: No stream activity for 600s (10 min) = zombie, removed in rebalance
- last_activity updated on every stream accept
5. Connection Lifecycle
5.1 Growth Loop (60s timer + reactive on N2/N3 receipt)
Timer: Fires every 60 seconds. Checks current mesh count. If < 101, runs a growth cycle.
Reactive trigger: Fires immediately after receiving a peer's N2/N3 list (from initial exchange or routing diff). Continues firing on each new N2/N3 receipt until mesh is 90% full (~91 connections). After 90%, switches to timer-only mode.
Candidate selection (N2 diversity scoring):
score = 1.0 / reporter_count + (0.3 if not_in_N3)
- Fewer reporters = higher diversity = better candidate
- Bonus for locally-discovered peers (not transitive)
- Sorted descending, best candidate tried first
- Growth loop goes wide by default — no local/wide distinction
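The scoring rule above can be written directly as code. This is a sketch of the formula as stated in the doc; the `Candidate` struct and `rank` helper are illustrative names, not the real types.

```rust
/// Hypothetical N2 candidate record for growth-loop scoring.
struct Candidate {
    node_id: [u8; 32],
    reporter_count: u32, // how many mesh peers reported this node in their N1
    in_n3: bool,         // also known transitively via N3?
}

/// Diversity score from the doc: fewer reporters = higher diversity,
/// with a 0.3 bonus for locally-discovered peers not present in N3.
fn diversity_score(c: &Candidate) -> f64 {
    let base = 1.0 / c.reporter_count.max(1) as f64;
    let bonus = if c.in_n3 { 0.0 } else { 0.3 };
    base + bonus
}

/// Sort candidates best-first for connection attempts.
fn rank(mut candidates: Vec<Candidate>) -> Vec<Candidate> {
    candidates.sort_by(|a, b| {
        diversity_score(b).partial_cmp(&diversity_score(a)).unwrap()
    });
    candidates
}
```

A peer reported by one mesh peer and absent from N3 scores 1.3; a peer reported by five and also in N3 scores 0.2, so the former is dialed first.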
Connection attempt cascade:
- Direct connect (15s timeout) — use stored/resolved address
- Introduction fallback — find N2 reporters who know this peer, ask each to relay-introduce us
Failure handling: Track consecutive failures. After 3 consecutive failures, back off (break loop, wait for next signal). Mark unreachable peers for future skipping.
5.2 Rebalance Cycle (every 600s)
Executed in priority order:
- Dead connection removal: Remove connections with close_reason() set, or idle > 600s (zombie)
- Stale entry pruning: N2/N3 entries older than 7 days, social route watchers older than 24 hours
- Priority 0 — Preferred peer reconnection: Iterate preferred_peers table, reconnect any that are disconnected. If at capacity, evict the lowest-diversity non-preferred peer to make room. Prune preferred peers unreachable for 24+ hours.
- Priority 1 — Reconnect recently dead: Re-establish dropped non-preferred connections
- Priority 2 — Signal growth loop: Fill remaining empty slots via growth loop
- Idle session cleanup: Reap interactive sessions idle > 300s (5 min). Keep-alive sessions are NOT reaped by idle timeout.
- Relay intro dedup pruning: Clear seen_intros entries older than 30s, cap at 500
Note: Low diversity score alone does NOT trigger eviction. The only eviction path is Priority 0 (making room for a preferred peer).
5.3 Recovery Loop (reactive, mesh empty)
Trigger: disconnect_peer() fires when last mesh connection drops.
- Debounce 2 seconds (wait for cascading disconnects to settle)
- Gather anchors: known_anchors table (top-5 by success_count) → fallback to anchor_peers list
- For each anchor: connect, request referrals and matchmaking, try direct connect to each referral, fallback to hole punch via anchor for unreachable referrals
- Persist on anchor's referral list until released, begin growth loop immediately
5.4 Initial Exchange (on every new connection)
When two nodes connect, they exchange:
- N+10: Our NodeId + 10 preferred peers' NodeIds
- N1 share: mesh peers + social contacts NodeIds (merged, no addresses)
- N2 share: deduplicated N2 NodeIds (no addresses)
- Profile: PublicProfile (display name, bio, avatar CID, public_visible flag)
- Delete records: Signed post deletions
- Post IDs: All local post IDs (for replica tracking)
- Peer addresses: N+10 address list for connected peers
Processing: Their N1 → our N2 table (tagged to reporter). Their N2 → our N3 table (tagged to reporter). Store profile, apply deletes, record replica overlaps. Trigger growth loop immediately with new N2/N3 candidates if mesh < 90% full.
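The reporter-tagged bookkeeping described above can be sketched as follows. The `Knowledge` struct and method names are illustrative; the real tables live in SQLite (reachable_n2 / reachable_n3).

```rust
use std::collections::{HashMap, HashSet};

type NodeId = u64; // stand-in for the 32-byte NodeId

/// Reporter-tagged N2/N3 knowledge tables (illustrative in-memory shape).
#[derive(Default)]
struct Knowledge {
    n2: HashMap<NodeId, HashSet<NodeId>>, // target -> reporters
    n3: HashMap<NodeId, HashSet<NodeId>>, // target -> reporters
}

impl Knowledge {
    /// Apply a peer's initial-exchange shares: their N1 becomes our N2,
    /// their N2 becomes our N3, each entry tagged to the reporter.
    fn apply_exchange(&mut self, reporter: NodeId, their_n1: &[NodeId], their_n2: &[NodeId]) {
        for &t in their_n1 {
            self.n2.entry(t).or_default().insert(reporter);
        }
        for &t in their_n2 {
            self.n3.entry(t).or_default().insert(reporter);
        }
    }

    /// Reporter count feeds the growth loop's diversity scoring.
    fn reporter_count(&self, target: NodeId) -> usize {
        self.n2.get(&target).map_or(0, |s| s.len())
    }
}
```

Tagging every entry to its reporter is what makes the address-resolution cascade possible: step 2 ("ask the reporter") is a lookup in exactly this map.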
5.5 Incremental Routing Diffs (every 120s + on change)
NodeListUpdate (0x01) contains N1 added/removed, N2 added/removed. Sent via uni-stream to all mesh peers and keep-alive sessions. Receiver processes: their N1 adds → our N2 adds, their N2 adds → our N3 adds, etc.
6. Network Knowledge Layers (N1/N2/N3)
| Layer | Source | Contains | Shared? | Stored in |
| --- | --- | --- | --- | --- |
| N1 | Our connections + social contacts | NodeIds only | Yes (as "N1 share") | mesh_peers + social_routes |
| N2 | Peers' N1 shares | NodeIds tagged by reporter | Yes (as "N2 share") | reachable_n2 |
| N3 | Peers' N2 shares | NodeIds tagged by reporter | Never | reachable_n3 |
<N4 access
A node has <N4 access to a target if the target appears in its N1, N2, or N3 tables. This means the target is reachable within 3 hops without needing worm search or relay introduction. The social/file connectivity check (see Section 14) uses <N4 access to determine whether keep-alive sessions are needed.
What is NEVER shared
- Addresses (resolved on-demand via chain queries)
- N3 entries (search-only, never forwarded)
- Duplication counts (topology leak)
- Which NodeIds are social contacts vs mesh peers (merged in N1 share)
Address resolution cascade (connect_by_node_id)
| Step | Method | Timeout | Source |
| --- | --- | --- | --- |
| 0 | Social route cache | — | social_routes table (cached addresses for follows/audience) |
| 1 | Peers table | — | Stored address from previous connection |
| 2 | N2 ask reporter | varies | Ask the mesh peer who reported target in their N1 |
| 3 | N3 chain resolve | varies | Ask reporter's reporter (2-hop chain) |
| 4 | Worm search | 3s total | Fan-out to all peers → bloom to wide referrals |
| 5 | Relay introduction | 15s | Hole punch via intermediary relay |
| 6 | Session relay | — | Pipe traffic through intermediary (own-device or opt-in) |
7. Three-Layer Architecture (Mesh / Social / File)
The network operates across three distinct layers, each with its own connections, routing, and purpose. The separation enables specialized behavior without the layers interfering with each other.
| Layer | Purpose | Connections | Sync trigger |
| --- | --- | --- | --- |
| Mesh | Structural backbone: N1/N2/N3 routing, diversity, discovery | 101 mesh slots (preferred + non-preferred) | N/A — mesh is infrastructure, not content |
| Social | Follows, audience, DMs — the human relationships | Social routes + keep-alive sessions as needed | Pull posts when Self Last Encounter > 3 hours |
| File | Content storage and distribution — blobs, CDN trees | Upstream/downstream file peers + keep-alive sessions as needed | Pull on blob request, push on post creation |
Key principle: mesh is not for content
Pull sync does not pull posts from mesh peers. Mesh connections exist for routing diversity and discovery. Content flows through the social layer (posts from people you follow) and the file layer (blobs from upstream/downstream hosts). This separation means mesh connections can be optimized purely for network topology without social bias.
Cross-layer benefits
Each layer's connections contribute to finding nodes and referrals for the other layers. Keep-alive sessions from the social and file layers participate in N2/N3 routing, which improves <N4 access for all three layers. A social keep-alive session might provide the N2 entry that helps the mesh growth loop find a diverse new peer, and vice versa.
8. Anchors
Intent
Anchors are "just peers that are always on" — standard ItsGoin nodes on stable servers. They run the same code with no special protocol. Their value comes from being consistently available for bootstrapping new nodes into the network and matchmaking (introducing peers to each other).
Each profile can carry a preferred anchor list — infrastructure addresses, not social signals.
Status: Complete (with gaps)
When anchors are used
- Bootstrap: First startup with empty peers table. Connect to anchor, request referrals and matchmaking, persist on referral list while growing mesh.
- Recovery: When mesh drops to 0 connections. Same flow as bootstrap.
- Not ongoing: Nodes do NOT register with anchors on a loop. Anchors are for forming initial connections, not for ongoing presence.
Anchor referral mechanics
When a bootstrapping node connects, the anchor provides referrals from its mesh and referral list. The node persists on the anchor's referral list until released at the referral count limit. During this time, the anchor can matchmake — introducing the new node to other peers requesting referrals.
Session fallback for full anchors
When an anchor's mesh is full (101/101), new nodes fall back to a session connection for matchmaking. The anchor accepts referral requests over session connections, not just mesh.
Remaining gaps
| Gap | Impact |
| --- | --- |
| Profile anchor lists not used for discovery | Profiles have an anchors field but it's not consulted during address resolution |
| No anchor-to-anchor awareness | Anchors don't discover each other unless they connect through normal mesh growth |
| Bootstrap chicken-and-egg | A fresh anchor with few peers produces few N2 candidates for new nodes. Growth stalls because there's nothing to grow from. |
9. Referrals
Status: Complete
Referral list mechanics (anchor side)
Anchors maintain an in-memory HashMap of registered peers. Each entry: { node_id, addresses, use_count, disconnected_at }.
| Property | Value |
| --- | --- |
| Tiered usage caps | 3 uses if list < 50, 2 uses at 50+, 1 use at 100+ |
| Disconnect grace | 2 minutes before pruning |
| Sort order | Least-used first (distributes load) |
| Auto-supplement | When explicit list is sparse (< 3 entries), supplement with random mesh peers |
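The tiered usage caps reduce to a small pure function. A sketch (function names are illustrative):

```rust
/// Tiered usage caps from the referral table: smaller lists allow more reuse,
/// larger lists spread referrals more thinly.
fn max_uses(list_len: usize) -> u32 {
    if list_len >= 100 {
        1
    } else if list_len >= 50 {
        2
    } else {
        3
    }
}

/// An entry remains eligible for referral while under its cap.
fn is_referrable(use_count: u32, list_len: usize) -> bool {
    use_count < max_uses(list_len)
}
```

Combined with least-used-first ordering, this keeps referral load roughly even across the list as it grows.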
10. Relay & NAT Traversal
Status: Complete
Relay selection (find_relays_for)
Find up to 3 relay candidates, prioritized:
- Preferred tree intersection: Target's preferred_tree (from social_routes, ~100 NodeIds) intersected with our connections. Prefer our own preferred peers within that tree. TTL=0.
- N2 reporters: Our mesh peers who reported the target in their N1 share. TTL=0.
- N3 via preferred tree: Target's preferred_tree intersected with N3 reporters. TTL=1.
- N3 reporters: Any N3 reporter for the target. TTL=1.
RelayIntroduce flow (0xB0/0xB1)
- Requester → opens bi-stream to relay, sends RelayIntroduce { target, requester, requester_addresses, ttl }
- Relay handles three cases:
- We ARE the target: Return our addresses, spawn hole punch to requester
- Target is our mesh peer: Forward request to target on new bi-stream, relay response back. Inject observed public addresses for both parties.
- TTL > 0 and target in our N2: Forward to the reporter with TTL-1 (chain forwarding, max TTL=2)
- Requester receives RelayIntroduceResult { target_addresses, relay_available }, then:
  - hole_punch_parallel(): Try all returned addresses in parallel, retry every 2s, 30s total timeout
  - If hole punch fails and relay_available: open SessionRelay (0xB2) pipe through the intermediary
Session relay (relay pipes)
Intermediary splices bi-streams between requester and target. Desktop: max 10 concurrent pipes. Mobile: max 2. Each pipe has a 50MB byte cap and 2-min idle timeout.
v0.2.0 change: Relay pipes are own-device-only by default. A node will only relay traffic between its own devices (same identity key, different device identity). Users can opt in to relaying for others in Settings, but this is not enabled automatically. This prevents nodes from unknowingly burning bandwidth for random peers while still enabling personal multi-device routing.
Deduplication & cooldowns
| Mechanism | Window | Purpose |
| --- | --- | --- |
| seen_intros | 30s | Prevents forwarding loops |
| relay_cooldowns | 5 min per target | Prevents relay spamming |
Hole punch mechanics
Parse all returned addresses into QUIC EndpointAddr. Spawn parallel connect attempts to every address simultaneously. Each attempt: 2s timeout, retried until 30s total deadline. First successful connection wins, all others aborted.
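The first-success-wins pattern behind hole_punch_parallel can be sketched with threads and a channel standing in for parallel QUIC connect attempts. Everything here is illustrative: `attempt` replaces a timed connect, and the real implementation is async with per-attempt retries.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Spawn one attempt per address; the first success wins, losers are ignored.
/// `attempt` stands in for a single timed QUIC connect (hypothetical signature).
fn punch_parallel<F>(addrs: Vec<String>, attempt: F, deadline: Duration) -> Option<String>
where
    F: Fn(&str) -> bool + Send + Clone + 'static,
{
    let (tx, rx) = mpsc::channel();
    for addr in addrs {
        let tx = tx.clone();
        let attempt = attempt.clone();
        thread::spawn(move || {
            if attempt(&addr) {
                let _ = tx.send(addr); // first sender wins; later sends go unread
            }
        });
    }
    drop(tx); // channel disconnects once all attempts fail, unblocking recv
    rx.recv_timeout(deadline).ok()
}
```

The real cascade keeps the 30s total deadline and aborts remaining attempts once one address connects.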
11. Worm Search
Status: Complete
Used at step 4 of connect_by_node_id, after N2/N3 resolution fails.
Algorithm
- Build needles: target NodeId + target's N+10 (up to 10 preferred peers from their profile/cached N+10)
- Local check: Search own connections + N2/N3 for any of the 11 needles
- Fan-out (500ms timeout): Send WormQuery{ttl=0} (0x60) to all mesh peers in parallel. Each peer checks their local connections + N2/N3.
- Bloom round (1.5s timeout): Each fan-out response includes a random "wide referral" peer. Connect to those referrals and send WormQuery{ttl=1} (they fan out to their peers with ttl=0).
- Total timeout: 3 seconds for the entire search.
Dedup & cooldown
| Mechanism | Window | Purpose |
| --- | --- | --- |
| seen_worms | 10s | Prevents loops during fan-out |
| Miss cooldown | 5 min (in DB) | Prevents repeated searches for unreachable targets |
12. Preferred Peers
Status: Complete
Negotiation (MeshPrefer, 0xB3)
- Bilateral: Requester sends MeshPrefer{requesting: true}, responder accepts/rejects
- Acceptance: Both sides persist to preferred_peers table, upgrade slot to PeerSlotKind::Preferred
- Rejection reasons: "not connected", "preferred slots full (N/M)"
Properties
- Eviction-protected: Never evicted during rebalance (only non-preferred peers can be evicted)
- Priority reconnect: Reconnected first in rebalance (Priority 0), before any growth
- Pruned after 24h unreachable: If a preferred peer can't be reached for 24 hours, it's removed from the preferred list
- N+10 component: Your 10 preferred peers' NodeIds are included in your N+10 for all identification (see Section 3)
- Preferred tree: Each social route caches a preferred_tree (~100 NodeIds) — the target's preferred peers' preferred peers. Used for relay selection.
13. Social Routing
Status: Complete
Caches addresses for follows and audience members, separate from mesh connections.
social_routes table
| Field | Purpose |
| --- | --- |
| node_id | The social contact's NodeId |
| nplus10 | Their N+10 (NodeId + 10 preferred peers) |
| addresses | Their known IP addresses |
| peer_addresses | Their N+10 contacts (PeerWithAddress list) |
| relation | Follow / Audience / Mutual |
| status | Online / Disconnected |
| last_connected_ms | When we last connected |
| reach_method | Direct / Relay / Indirect |
| preferred_tree | ~100 NodeIds for relay tree |
Wire messages
| Code | Name | Stream | Purpose |
| --- | --- | --- | --- |
| 0x70 | SocialAddressUpdate | Uni | Sent when a social contact's address changes or they reconnect |
| 0x71 | SocialDisconnectNotice | Uni | Sent when a social contact disconnects |
| 0x72 | SocialCheckin | Bi | Keepalive with address + N+10 updates |
Reconnect watchers
reconnect_watchers table: when peer A asks about disconnected peer B, A is registered as a watcher. When B reconnects, A gets a SocialAddressUpdate notification. Watchers pruned after 24 hours.
Social route lifecycle
- Follow → store their N+10, upgrade to Mutual (if audience)
- Unfollow → downgrade/remove
- Approve audience → Mutual/Audience
14. Keep-Alive Sessions
Status: Planned
Purpose
When the mesh 101 doesn't provide <N4 access to all the nodes we need for social and file operations, keep-alive sessions bridge the gap. These are long-lived connections that participate in N2/N3 routing but are not part of the mesh 101.
Social/File connectivity check (every 60s)
Periodically check whether we have <N4 access (within N1/N2/N3) to the N+10 of every node we need:
- Nodes we DM'd in the last 4 hours
- All follows
- All audience members
- All file upstream peers (for blobs we host)
- All file downstream peers (for blobs we serve)
For any node whose N+10 is NOT reachable within N3, open a keep-alive session to the closest available node in their N+10 (or to them directly if possible). This ensures we can always find and reach our social and file contacts without worm search.
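The decision logic of the connectivity check can be sketched as two small functions. This is a simplification under stated assumptions: `NodeId` is shrunk to a `u64`, "closest available node" is approximated as the first dialable member, and both function names are hypothetical.

```rust
use std::collections::HashSet;

type NodeId = u64; // stand-in for the 32-byte NodeId

/// A keep-alive session is needed when NO member of the contact's N+10
/// appears in our combined N1/N2/N3 knowledge (i.e. <N4 access fails).
fn needs_keepalive(nplus10: &[NodeId], known_within_n3: &HashSet<NodeId>) -> bool {
    !nplus10.iter().any(|n| known_within_n3.contains(n))
}

/// Pick a keep-alive target: the contact itself if dialable, else the first
/// dialable member of its N+10, else fall back to the contact directly.
fn keepalive_target(contact: NodeId, nplus10: &[NodeId], dialable: &HashSet<NodeId>) -> NodeId {
    if dialable.contains(&contact) {
        return contact;
    }
    nplus10
        .iter()
        .copied()
        .find(|n| dialable.contains(n))
        .unwrap_or(contact)
}
```

Run over the union of DM partners (4h), follows, audience, and file upstream/downstream peers, this yields the set of keep-alive sessions to open or close each cycle.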
Keep-alive session behavior
- N2/N3 routing: Keep-alive sessions exchange N1/N2 diffs and participate in routing, similar to mesh connections. They expand our network knowledge without consuming mesh slots.
- Not counted in mesh 101: Keep-alive sessions are a separate pool. They don't affect mesh diversity scoring or slot management.
- Capacity limit: Max 50% of total session capacity is reserved for keep-alive sessions. The other 50% remains available for interactive sessions (DMs, group activity).
- Not idle-reaped: Unlike interactive sessions (5-min idle timeout), keep-alive sessions persist as long as the connectivity need exists.
- Reevaluated periodically: The 60s connectivity check closes keep-alive sessions that are no longer needed (e.g., the target now appears in N3 via a mesh connection).
Cross-layer benefit
Keep-alive sessions from the social and file layers feed N2/N3 entries back into the mesh layer. A social keep-alive to a friend's preferred peer might provide N2 entries that help the mesh growth loop. Similarly, a file keep-alive to an upstream host might provide access to nodes the mesh has never seen. The three layers compound each other's reach.
15. Content Propagation
Intent
"Attention creates propagation": when you view something, you cache it. The cache is optionally offered for serving. Hot content spreads naturally through demand. Cold content decays unless intentionally hosted.
The CDN vision: every file by author X carries an author manifest with the author's N+10 and recent post list. If you hold any file by author X, you passively know X's recent posts and can find X through their N+10.
Status: Partial
- BlobRequest/BlobResponse (0x90/0x91) for peer-to-peer blob fetch
- AuthorManifest (ed25519-signed, 10+10 post neighborhood) travels with blob responses
- CDN hosting tree (1 upstream + 100 downstream per blob)
- ManifestPush propagates updates down the tree
- BlobDeleteNotice for tree healing on eviction
- Blob eviction with social-aware priority scoring
What's missing
| Gap | Impact |
| --- | --- |
| No passive file-chain propagation | AuthorManifest only travels with explicit BlobResponse, not passively. Holding old files by an author doesn't notify you of their new posts. |
| N+10 not yet in file headers | Blob headers should include author N+10, upstream N+10, and downstream N+10s. Currently only AuthorManifest travels with blobs. |
| No "fetch from any peer who has it" | Blobs are fetched from specific peers. No content-addressed routing ("who has blob X?"). |
16. Files & Storage
Blob storage
Status: Complete
| Property | Value |
| --- | --- |
| CID format | BLAKE3 hash of blob data (32 bytes, hex-encoded) |
| Filesystem path | {data_dir}/blobs/{hex[0..2]}/{hex} (256 shards) |
| Metadata table | blobs (cid, post_id, author, size_bytes, created_at, last_accessed_at, pinned) |
| Max blob size | 10 MB |
| Max attachments per post | 4 |
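The sharded path scheme from the table is a one-liner. A minimal sketch (function name is illustrative; the real store presumably also validates the CID):

```rust
use std::path::PathBuf;

/// Compute the sharded blob path: {data_dir}/blobs/{hex[0..2]}/{hex}.
/// `cid_hex` is the hex-encoded BLAKE3 hash (64 chars for 32 bytes),
/// so the first two hex chars select one of 256 shard directories (00..ff).
fn blob_path(data_dir: &str, cid_hex: &str) -> PathBuf {
    let shard = &cid_hex[0..2];
    PathBuf::from(data_dir).join("blobs").join(shard).join(cid_hex)
}
```

Sharding by hex prefix keeps any single directory to roughly 1/256th of the blobs, which avoids filesystem slowdowns on large stores.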
File headers (intent)
Every post and blob carries header information that enables discovery and routing:
- Author's N+10: Updated whenever the author posts
- Upstream file source's N+10: If the source is not the author
- Downstream host N+10s: Up to 100 downstream file hosts, for pushing file changes
- Author's recent post list: Updated on each new post — enables passive discovery of new content
- Self Last Encounter: Stored per-author, becomes the newer of what's stored and "file last update." Determines when to trigger pull sync.
Blob transfer flow (0x90/0x91)
- Requester sends BlobRequest { cid, requester_addresses }
- Host checks local BlobStore:
  - Has blob: Return base64-encoded data + CDN manifest + file header (N+10s, recent posts). Try to register requester as downstream (max 100). If full, return existing downstream as redirect candidates.
  - No blob: Return found: false
- Requester verifies CID, stores blob locally, records upstream in blob_upstream table. Updates Self Last Encounter for the author based on the file header.
CDN hosting tree
Status: Complete
- AuthorManifest: ed25519-signed by post author, contains post neighborhood (10 previous + 10 following posts), author N+10, author addresses
- CdnManifest: AuthorManifest + hosting metadata (host NodeId/addresses, source, downstream count)
- Tree structure: Each blob has 1 upstream source + up to 100 downstream hosts
- ManifestPush (0x94): Author/admin pushes updated manifests downstream, which relay to their downstream
- ManifestRefreshRequest/Response (0x92/0x93): Check if manifest has been updated since last fetch
- BlobDeleteNotice (0x95): Notify tree when blob is deleted; includes upstream info for tree healing
Blob eviction
Status: Complete
priority = pin_boost + (relationship * heart_recency * freshness / (peer_copies + 1))
| Factor | Calculation |
| --- | --- |
| pin_boost | 1000.0 if pinned, else 0.0. Own blobs auto-pinned. |
| relationship | 5.0 (us), 3.0 (mutual follow+audience), 2.0 (follow), 1.0 (audience), 0.1 (stranger) |
| heart_recency | Linear decay over 30 days: max(0, 1 - age/30d) |
| freshness | 1 / (1 + post_age_days) |
| peer_copies | Known replica count (from post_replicas, only if < 1 hour old) |
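The priority formula and its factor table translate directly into code. A sketch (the function signature is illustrative; real inputs come from the DB):

```rust
/// Eviction priority as given by the formula:
/// priority = pin_boost + relationship * heart_recency * freshness / (peer_copies + 1)
/// Lower-priority blobs are evicted first.
fn eviction_priority(
    pinned: bool,
    relationship: f64,   // 5.0 (us) down to 0.1 (stranger)
    heart_age_days: f64, // age of the most recent heart
    post_age_days: f64,
    peer_copies: u32,    // known replicas, if fresher than 1 hour
) -> f64 {
    let pin_boost = if pinned { 1000.0 } else { 0.0 };
    let heart_recency = (1.0 - heart_age_days / 30.0).max(0.0); // linear 30-day decay
    let freshness = 1.0 / (1.0 + post_age_days);
    pin_boost + relationship * heart_recency * freshness / (peer_copies as f64 + 1.0)
}
```

Note how each known replica roughly halves, thirds, etc. the unpinned priority: widely-replicated content is the safest to drop locally.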
Hosting quota & pin modes
Status: Planned
| Concept | Status |
| --- | --- |
| 3x hosting quota tracking & enforcement | Not started. Every node must host 3x the bytes they personally posted. |
| Anchor pin vs Fork pin | Not started. Anchor pin = host the original (author retains control). Fork pin = independent copy (you become key owner). |
| Personal vault | Not started. Private durability for saved/pinned items, not counted toward 3x. |
17. Sync Protocol
Wire format
[1 byte: MessageType] [4 bytes: length (big-endian)] [length bytes: JSON payload]
Max payload: 16 MB. ALPN: distsoc/3.
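The frame layout above can be encoded and decoded in a few lines. A minimal sketch, assuming the payload is already-serialized JSON bytes (function names are illustrative):

```rust
const MAX_PAYLOAD: usize = 16 * 1024 * 1024; // 16 MB cap from the spec

/// Encode a frame: [1 byte type][4 bytes big-endian length][payload].
fn encode_frame(msg_type: u8, payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(5 + payload.len());
    frame.push(msg_type);
    frame.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    frame.extend_from_slice(payload);
    frame
}

/// Decode a frame, rejecting truncated or oversized input.
fn decode_frame(buf: &[u8]) -> Option<(u8, &[u8])> {
    if buf.len() < 5 {
        return None; // incomplete header
    }
    let len = u32::from_be_bytes([buf[1], buf[2], buf[3], buf[4]]) as usize;
    if len > MAX_PAYLOAD || buf.len() < 5 + len {
        return None; // over the 16 MB cap, or body not fully received
    }
    Some((buf[0], &buf[5..5 + len]))
}
```

Checking the length cap before allocating is what makes the 16 MB limit enforceable against hostile peers.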
Pull sync: social + file layers, not mesh
v0.2.0 change: Pull sync pulls posts from social layer peers (follows, audience) and upstream file peers, NOT from mesh peers. Mesh connections exist for routing diversity, not content. This separates infrastructure from content flow.
Self Last Encounter: For each peer we sync with, we track the timestamp of our last successful sync. When Self Last Encounter ages beyond 3 hours, a pull sync is triggered. Self Last Encounter is updated to the newer of: (a) what's currently stored, or (b) the "file last update" timestamp from file headers received during blob transfers. Since file headers include the author's recent post list, downloading a blob from any peer hosting that author's content can update Self Last Encounter for the author.
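The Self Last Encounter rules reduce to a max-merge plus a threshold check. A sketch with illustrative names:

```rust
const PULL_SYNC_THRESHOLD_MS: u64 = 3 * 60 * 60 * 1000; // 3-hour threshold

/// Update rule: keep the newer of the stored value and the
/// "file last update" timestamp carried by an incoming file header.
fn merge_last_encounter(stored_ms: u64, file_last_update_ms: u64) -> u64 {
    stored_ms.max(file_last_update_ms)
}

/// A pull sync is due once the last encounter ages past the threshold.
fn pull_sync_due(now_ms: u64, last_encounter_ms: u64) -> bool {
    now_ms.saturating_sub(last_encounter_ms) > PULL_SYNC_THRESHOLD_MS
}
```

Because the merge only moves forward, a blob fetched from any third-party host can silently satisfy the sync freshness requirement for that author.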
Pull sync filtering
- PullSyncRequest: Includes requester's follow list + post IDs they already have
- PullSyncResponse: Sender filters posts through should_send_post():
- Author is requester → always send (own posts relayed back)
- Public + author in requester's follows → send
- Encrypted + requester in wrapped key recipients → send
- Otherwise → skip
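The filtering rules can be sketched as a single match. This is a simplified model: `NodeId` is shrunk to a `u64`, the `Visibility` enum covers only the two cases relevant here, and the signature is not the real one.

```rust
use std::collections::HashSet;

type NodeId = u64; // stand-in for the 32-byte NodeId

/// Simplified visibility model for the filtering sketch.
enum Visibility {
    Public,
    Encrypted { recipients: HashSet<NodeId> },
}

/// Pull-sync filter following the rules in the doc (illustrative shape).
fn should_send_post(
    author: NodeId,
    visibility: &Visibility,
    requester: NodeId,
    requester_follows: &HashSet<NodeId>,
) -> bool {
    if author == requester {
        return true; // own posts are always relayed back
    }
    match visibility {
        // Public posts only flow to followers of the author.
        Visibility::Public => requester_follows.contains(&author),
        // Encrypted posts only flow to wrapped-key recipients.
        Visibility::Encrypted { recipients } => recipients.contains(&requester),
    }
}
```

The encrypted branch is why Tier 2 revocation (Section 18) often suffices: a revoked member who never matched this filter never received the ciphertext at all.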
Message types (37 total)
| Hex | Name | Stream | Purpose |
| --- | --- | --- | --- |
| 0x01 | NodeListUpdate | Uni | Incremental N1/N2 diff broadcast |
| 0x02 | InitialExchange | Bi | Full state exchange on connect |
| 0x03 | AddressRequest | Bi | Resolve NodeId → address via reporter |
| 0x04 | AddressResponse | Bi | Address resolution reply |
| 0x05 | RefuseRedirect | Uni | Refuse mesh + suggest alternative |
| 0x40 | PullSyncRequest | Bi | Request posts filtered by follows |
| 0x41 | PullSyncResponse | Bi | Respond with filtered posts |
| 0x42 | PostNotification | Uni | Lightweight "new post" push to social contacts |
| 0x43 | PostPush | Uni | Direct encrypted post delivery to recipients |
| 0x44 | AudienceRequest | Bi | Request audience member list |
| 0x45 | AudienceResponse | Bi | Audience list reply |
| 0x50 | ProfileUpdate | Uni | Push profile changes |
| 0x51 | DeleteRecord | Uni | Signed post deletion |
| 0x52 | VisibilityUpdate | Uni | Re-wrapped visibility after revocation |
| 0x60 | WormQuery | Bi | Fan-out search beyond N3 |
| 0x61 | WormResponse | Bi | Worm search reply |
| 0x70 | SocialAddressUpdate | Uni | Social contact address changed |
| 0x71 | SocialDisconnectNotice | Uni | Social contact disconnected |
| 0x72 | SocialCheckin | Bi | Keepalive + address + N+10 update |
| 0x90 | BlobRequest | Bi | Fetch blob by CID |
| 0x91 | BlobResponse | Bi | Blob data + CDN manifest + file header |
| 0x92 | ManifestRefreshRequest | Bi | Check manifest freshness |
| 0x93 | ManifestRefreshResponse | Bi | Updated manifest reply |
| 0x94 | ManifestPush | Uni | Push updated manifests downstream |
| 0x95 | BlobDeleteNotice | Uni | CDN tree healing on eviction |
| 0xA0 | GroupKeyDistribute | Uni | Distribute circle group key to member |
| 0xA1 | GroupKeyRequest | Bi | Request group key for a circle |
| 0xA2 | GroupKeyResponse | Bi | Group key reply |
| 0xB0 | RelayIntroduce | Bi | Request relay introduction |
| 0xB1 | RelayIntroduceResult | Bi | Introduction result with addresses |
| 0xB2 | SessionRelay | Bi | Splice bi-streams (own-device default) |
| 0xB3 | MeshPrefer | Bi | Preferred peer negotiation |
| 0xB4 | CircleProfileUpdate | Uni | Encrypted circle profile variant |
| 0xC0 | AnchorRegister | Uni | Register with anchor (bootstrap/recovery only) |
| 0xC1 | AnchorReferralRequest | Bi | Request peer referrals from anchor |
| 0xC2 | AnchorReferralResponse | Bi | Referral list reply |
| 0xE0 | MeshKeepalive | Uni | 30s connection heartbeat |
18. Encryption
Envelope encryption (1-layer)
Status: Complete
- Generate random 32-byte CEK (Content Encryption Key)
- Encrypt content: ChaCha20-Poly1305(plaintext, CEK, random_nonce)
- Store as: base64(nonce[12] || ciphertext || tag[16])
- For each recipient (including self):
  - X25519 DH: our_ed25519_private (as X25519) * their_ed25519_public (as montgomery)
  - Derive wrapping key: BLAKE3_derive_key("distsoc/cek-wrap/v1", shared_secret)
  - Wrap CEK: ChaCha20-Poly1305(CEK, wrapping_key, random_nonce) → 60 bytes per recipient
Visibility variants
| Variant | Overhead | Audience limit |
| --- | --- | --- |
| Public | None | Unlimited |
| Encrypted { recipients } | ~60 bytes per recipient | ~500 (256KB cap) |
| GroupEncrypted { group_id, epoch, wrapped_cek } | ~100 bytes total | Unlimited (one CEK wrap for the group) |
PostId integrity
PostId = BLAKE3(Post) covers the ciphertext, NOT the recipient list. Visibility is separate metadata. This means visibility can be updated (re-wrapped) without changing the PostId.
Group keys (circles) Complete
- Each circle gets its own ed25519 keypair; group_id = BLAKE3(initial_public_key) — permanent identifier
- Group seed wrapped per-member via X25519 DH (KDF domain: "distsoc/group-key-wrap/v1")
- Epoch rotation: On member removal, generate new keypair, increment epoch, re-wrap for remaining members
- Wire: GroupKeyDistribute (0xA0), GroupKeyRequest/Response (0xA1/0xA2)
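The rotation rule can be sketched as plain bookkeeping. This is illustrative only: opaque byte arrays stand in for real ed25519 keypairs, and the per-member X25519 wrap is stubbed out.

```rust
use std::collections::HashMap;

struct CircleKeys {
    epoch: u64,
    group_seed: [u8; 32],              // regenerated on every rotation
    wrapped: HashMap<String, Vec<u8>>, // member -> seed wrapped via X25519 DH (stubbed)
}

impl CircleKeys {
    /// Member removal: drop their wrap, install a fresh seed, bump the epoch,
    /// and re-wrap for everyone who remains.
    fn remove_member(&mut self, member: &str, fresh_seed: [u8; 32]) {
        self.wrapped.remove(member);
        self.group_seed = fresh_seed;
        self.epoch += 1;
        for wrap in self.wrapped.values_mut() {
            *wrap = fresh_seed.to_vec(); // placeholder for the real per-member wrap
        }
    }
}

fn main() {
    let mut keys = CircleKeys {
        epoch: 0,
        group_seed: [0u8; 32],
        wrapped: HashMap::from([
            ("alice".to_string(), vec![0u8]),
            ("mallory".to_string(), vec![0u8]),
        ]),
    };
    keys.remove_member("mallory", [7u8; 32]);
    assert_eq!(keys.epoch, 1);
    assert!(!keys.wrapped.contains_key("mallory"));
}
```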
Three-tier access revocation
Three levels of revocation, chosen based on threat level:
Tier 1: Remove Going Forward (default)
Revoked member is excluded from future posts automatically. They retain access to anything they already received. This is the default behavior when removing a circle member — no special action needed.
When to use: Normal membership changes. Someone leaves a group, you unfollow someone. The common case.
Cost: Zero. Just stop including them in future recipient lists.
Tier 2: Rewrap Old Posts (cleanup)
Same CEK, re-wrap for remaining recipients only. The revoked member can no longer unwrap the CEK even if they later obtain the ciphertext. Propagate updated visibility headers via VisibilityUpdate (0x52).
When to use: Revoked member never synced the post (common with pull-based sync — encrypted posts only sent to recipients). You want to clean up access lists.
Cost: One WrappedKey operation per remaining recipient, no content re-encryption.
Tier 3: Delete & Re-encrypt (nuclear)
Generate new CEK, re-encrypt content, wrap new CEK for remaining recipients, push delete for old post ID, repost with new content but same logical identity. Well-behaved nodes honor the delete.
When to use: Revoked member already has the ciphertext and could unwrap the old CEK. Only for content that poses an actual danger/risk if the revoked member retains access. Recommended against in most cases.
Cost: Full re-encryption + delete propagation + new post propagation. Heavy.
Trust model: The app honors delete requests from content authors by default. A modified client could ignore deletes, but this is true of any decentralized system. For legal purposes: the author has proof they issued the delete and revoked access.
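The choice between tiers reduces to two questions: does the revoked member already hold the ciphertext, and would retained access be genuinely dangerous? A hypothetical selector (the enum and function are illustrative, not codebase types):

```rust
#[derive(Debug, PartialEq)]
enum RevocationTier {
    RemoveForward,      // Tier 1: zero cost, the default
    RewrapOldPosts,     // Tier 2: same CEK, smaller recipient list
    DeleteAndReencrypt, // Tier 3: new CEK + delete propagation, heavy
}

fn choose_tier(has_ciphertext: bool, dangerous: bool, cleanup: bool) -> RevocationTier {
    if has_ciphertext && dangerous {
        RevocationTier::DeleteAndReencrypt // only when retention is a real risk
    } else if cleanup && !has_ciphertext {
        RevocationTier::RewrapOldPosts // they can never unwrap the CEK later
    } else {
        RevocationTier::RemoveForward // the common case: just stop including them
    }
}

fn main() {
    assert_eq!(choose_tier(false, false, false), RevocationTier::RemoveForward);
    assert_eq!(choose_tier(false, false, true), RevocationTier::RewrapOldPosts);
    assert_eq!(choose_tier(true, true, false), RevocationTier::DeleteAndReencrypt);
}
```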
Private profiles (Phase D-4) Complete
Different profile versions per circle, encrypted with the circle/group key. A peer sees the profile version for the most-privileged circle they belong to. CircleProfileUpdate (0xB4) wire message. Public profiles can be hidden (public_visible=false strips display_name/bio).
19. Delete Propagation
Status: Complete
Delete records
DeleteRecord { post_id, author, timestamp_ms, signature } — ed25519-signed by author. Stored in deleted_posts table (INSERT OR IGNORE). Applied: DELETE from posts table WHERE post_id AND author match.
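A minimal in-memory sketch of the apply step, mirroring the INSERT OR IGNORE plus author-checked DELETE described above. Signature verification is assumed to have already happened, and the types are simplified stand-ins for the real schema.

```rust
use std::collections::{HashMap, HashSet};

struct DeleteRecord {
    post_id: String,
    author: String, // the ed25519 signature over these fields is checked before apply
}

struct Store {
    posts: HashMap<String, String>,     // post_id -> author
    deleted: HashSet<(String, String)>, // deleted_posts analogue, keyed by (post_id, author)
}

impl Store {
    fn apply_delete(&mut self, rec: &DeleteRecord) {
        // INSERT OR IGNORE into deleted_posts
        self.deleted.insert((rec.post_id.clone(), rec.author.clone()));
        // DELETE FROM posts WHERE post_id = ? AND author = ?
        if self.posts.get(&rec.post_id) == Some(&rec.author) {
            self.posts.remove(&rec.post_id);
        }
    }
}

fn main() {
    let mut store = Store {
        posts: HashMap::from([("p1".to_string(), "alice".to_string())]),
        deleted: HashSet::new(),
    };
    // A delete signed by the wrong author must not remove the post.
    store.apply_delete(&DeleteRecord { post_id: "p1".into(), author: "eve".into() });
    assert!(store.posts.contains_key("p1"));
    store.apply_delete(&DeleteRecord { post_id: "p1".into(), author: "alice".into() });
    assert!(!store.posts.contains_key("p1"));
}
```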
Propagation paths
- InitialExchange: All delete records exchanged on connect
- DeleteRecord message (0x51): Pushed via uni-stream to connected peers on creation
- PullSync: Included in responses for eventual consistency
CDN cascade on delete
- Send BlobDeleteNotice to all downstream hosts (with our upstream info for tree healing)
- Send BlobDeleteNotice to upstream
- Clean up blob metadata, manifests, downstream/upstream records
- Delete blob from filesystem
20. Social Graph Privacy
Status: Complete
- Follows are never shared in gossip or profiles
- N1 share merges mesh peers + social contacts into one list (indistinguishable)
- No addresses ever shared in routing updates
- N3 is never shared outward (search-only)
Known temporary weakness: An observer who diffs your N1 share over time can infer your social contacts (they're the stable members while mesh peers rotate). This will be addressed when CDN file-swap peers are added to N1, making the stable set larger and harder to distinguish.
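The merge itself is trivial; the point is that the resulting N1 share is a single deduplicated, deterministically ordered set with no per-entry provenance. A sketch, with node ids as strings for brevity:

```rust
use std::collections::BTreeSet;

// Fold mesh peers and social contacts into one N1 share. Sorting and
// deduplication leave no signal about which source an entry came from.
fn build_n1_share(mesh_peers: &[&str], social_contacts: &[&str]) -> Vec<String> {
    let mut merged = BTreeSet::new();
    merged.extend(mesh_peers.iter().map(|s| s.to_string()));
    merged.extend(social_contacts.iter().map(|s| s.to_string()));
    merged.into_iter().collect()
}

fn main() {
    let share = build_n1_share(&["nodeB", "nodeA"], &["nodeC", "nodeA"]);
    assert_eq!(share, vec!["nodeA", "nodeB", "nodeC"]);
}
```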
21. Multi-Device Identity
Status: Planned
Concept
Multiple devices share the same identity key (ed25519 keypair, same NodeId). All devices ARE the same node from the network's perspective. Posts from any device appear as the same author.
Device identity
Each device also generates a unique device identity (separate ed25519 keypair). This device-specific key is used to:
- Find each other: Devices with the same shared identity can search for each other using their device identities to facilitate syncs and self-routing
- Own-device relay: Route traffic through your own devices (e.g., home computer relaying for your phone) using the device identity for authentication
- Conflict resolution: When devices post simultaneously, device identity helps order and deduplicate
Setup
Export identity.key from one device, import on another. The device identity is generated automatically on each device. Once two devices share an identity key, they can discover each other through normal network routing (same NodeId appears at multiple addresses).
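One plausible shape for the ordering/dedup step, entirely illustrative (real posts carry a full PostId and a 32-byte device key): identical posts collapse by id, and concurrent distinct posts get a stable total order from (timestamp, device_id), so every device converges on the same sequence.

```rust
// (timestamp_ms, device_id, post_id) tuples standing in for real posts.
fn merge_device_posts(mut posts: Vec<(u64, u8, &'static str)>) -> Vec<(u64, u8, &'static str)> {
    posts.sort(); // total order: timestamp first, then device id breaks ties
    posts.dedup_by_key(|p| p.2); // the same post synced from two devices collapses
    posts
}

fn main() {
    let merged = merge_device_posts(vec![
        (100, 2, "p2"), // device 2 posts at t=100
        (100, 1, "p1"), // device 1 posts simultaneously
        (100, 1, "p1"), // duplicate of p1 received via a second path
    ]);
    assert_eq!(merged, vec![(100, 1, "p1"), (100, 2, "p2")]);
}
```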
22. Phase 2: Global Reciprocity / QoS
Status: Planned
The MVP is explicitly a "friends-only swarm" that works at small scale. Phase 2 adds:
- Hosting Pledge — signed assertion: "I host ≥ max(3x my posted bytes, 128MB minimum)"
- Random chunk audits — probabilistic proof of storage
- Tit-for-tat QoS — serve contributors first, guests last when overloaded
- Soft enforcement — degrade service gracefully, don't hard-ban NAT/mobile users
"Without Phase 2, MVP network will behave like a friends-only swarm. That's fine — just don't market it as resilient public infra until reciprocity exists."
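The pledge floor is a one-liner, assuming the max(3x posted bytes, 128 MB) rule stated above:

```rust
const PLEDGE_MIN_BYTES: u64 = 128 * 1024 * 1024; // 128 MB absolute floor

// Minimum bytes a node pledges to host, given how many bytes it has posted.
fn pledge_floor(posted_bytes: u64) -> u64 {
    (3 * posted_bytes).max(PLEDGE_MIN_BYTES)
}

fn main() {
    assert_eq!(pledge_floor(0), 134_217_728);           // the 128 MB floor dominates
    assert_eq!(pledge_floor(100_000_000), 300_000_000); // the 3x term dominates
}
```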
Appendix A: Timeout Reference
| Constant | Value | Purpose |
| MESH_KEEPALIVE_INTERVAL | 30s | Ping to prevent zombie detection |
| ZOMBIE_TIMEOUT | 600s (10 min) | No activity → dead connection |
| SESSION_IDLE_TIMEOUT | 300s (5 min) | Reap idle interactive sessions (NOT keep-alive) |
| SELF_LAST_ENCOUNTER_THRESHOLD | 10800s (3 hours) | Trigger pull sync when last encounter exceeds this |
| QUIC_CONNECT_TIMEOUT | 15s | Direct connection establishment |
| HOLE_PUNCH_TIMEOUT | 30s | Overall hole punch window |
| HOLE_PUNCH_ATTEMPT | 2s | Per-address attempt within window |
| RELAY_INTRO_TIMEOUT | 15s | Relay introduction request |
| RELAY_PIPE_IDLE | 120s (2 min) | Relay pipe idle before close |
| RELAY_COOLDOWN | 300s (5 min) | Per-target relay cooldown |
| RELAY_INTRO_DEDUP | 30s | Dedup intro forwarding |
| WORM_TOTAL_TIMEOUT | 3s | Entire worm search |
| WORM_FAN_OUT_TIMEOUT | 500ms | Per-peer fan-out query |
| WORM_BLOOM_TIMEOUT | 1.5s | Bloom round to wide referrals |
| WORM_DEDUP | 10s | In-flight worm dedup |
| WORM_COOLDOWN | 300s (5 min) | Miss cooldown before retry |
| REFERRAL_DISCONNECT_GRACE | 120s (2 min) | Anchor keeps peer in referral list after disconnect |
| N2/N3_STALE_PRUNE | 7 days | Remove old reach entries |
| PREFERRED_UNREACHABLE_PRUNE | 24 hours | Remove preferred peers that can't be reached |
| GROWTH_LOOP_TIMER | 60s | Periodic growth loop check |
| CONNECTIVITY_CHECK | 60s | Social/file <N4 access check for keep-alive sessions |
| DM_RECENCY_WINDOW | 14400s (4 hours) | DM'd nodes included in connectivity check |
Appendix B: Design Constraints
| Constraint | Value | Notes |
| Visibility metadata cap | 256 KB | Applies to WrappedKey lists in encrypted posts |
| Max recipients (per-recipient wrapping) | ~500 | 256KB / ~500 bytes JSON per WrappedKey |
| Max blob size | 10 MB | Per attachment |
| Max attachments per post | 4 | |
| Public post encryption overhead | Zero | No WrappedKeys, no sharding, unlimited audience |
| Max payload (wire) | 16 MB | Length-prefixed JSON framing |
| Mesh slots | 101 (Desktop) / 15 (Mobile) | Preferred + non-preferred, no local/wide distinction |
| Keep-alive session cap | 50% of session capacity | Ensures interactive sessions remain available |
Appendix C: Implementation Scorecard
| Area | Status |
| Mesh connection architecture (101 slots, preferred/non-preferred) | Complete |
| N1/N2/N3 knowledge layers | Complete |
| Growth loop (60s timer + reactive on N2/N3) | Partial (timer exists, reactive trigger needs update) |
| Preferred peers + bilateral negotiation | Complete |
| N+10 identification | Partial (preferred peers exist, N+10 not in all headers) |
| Worm search | Complete |
| Relay introduction + hole punch | Complete |
| Session relay (own-device default) | Partial (relay works, own-device restriction not implemented) |
| Social routing cache | Complete |
| Three-layer architecture (Mesh/Social/File) | Partial (layers exist conceptually, pull sync still uses mesh) |
| Keep-alive sessions | Planned |
| Self Last Encounter sync trigger | Planned |
| Algorithm-free reverse-chronological feed | Complete |
| Envelope encryption (1-layer) | Complete |
| Group keys for circles | Complete |
| Three-tier access revocation | Partial (Tier 1+2 work, Tier 3 crypto exists but no UI) |
| Private profiles per circle | Complete |
| Pull-based sync with follow filtering | Complete |
| Push notifications (post/profile/delete) | Complete |
| Blob storage + transfer | Complete |
| CDN hosting tree + manifests | Complete |
| Blob eviction with priority scoring | Complete |
| Anchor bootstrap + referrals | Complete |
| Delete propagation + CDN cascade | Complete |
| Multi-device identity | Planned |
| Content propagation via attention | Partial |
| 3x hosting quota | Planned |
| Phase 2 reciprocity/QoS | Planned |
| Audience sharding | Planned |
| Custom feeds | Planned |
Appendix D: Critical Path Forward
The highest-impact items, in priority order:
1. Three-layer separation (pull sync from social/file, not mesh)
Implement Self Last Encounter tracking and move pull sync to social + upstream file peers. This is the foundation for the layered architecture.
2. N+10 in all identification
Add N+10 (NodeId + 10 preferred peers) to self-identification, post headers, blob headers, and social routes. Dramatically improves findability.
3. Keep-alive sessions
Implement social/file connectivity check and keep-alive sessions for peers not reachable within N3. Cross-layer N2/N3 routing from keep-alive sessions.
4. Growth loop reactive trigger
Fire growth loop immediately on N2/N3 receipt until 90% full. Currently only timer-based.
5. Multi-device identity
Same identity key across devices with device-specific identity for self-discovery and own-device relay.
6. File-chain propagation
Make AuthorManifest with N+10 and recent posts work passively. Enable discovery of new content from any blob holder.
7. Own-device relay restriction
Restrict relay pipes to own-device by default, opt-in for relaying for others.
Appendix E: Features Designed But Not Built
| Feature | Source | Status |
| Three-layer pull sync (social/file, not mesh) | v0.2.0 design | Planned |
| N+10 in all identification & headers | v0.2.0 design | Planned |
| Keep-alive sessions | v0.2.0 design | Planned |
| Multi-device identity | v0.2.0 design | Planned |
| Own-device relay restriction | v0.2.0 design | Planned |
| Self Last Encounter sync trigger | v0.2.0 design | Planned |
| 3x hosting quota tracking & enforcement | project discussion.txt | Planned |
| Anchor pin vs Fork pin distinction | project discussion.txt | Planned |
| Audience sharding for groups > 250 | ARCHITECTURE.md | Planned |
| Repost as first-class post type | project discussion.txt | Planned |
| Custom feeds (keyword/media/family rules) | project discussion.txt | Planned |
| Bounce routing (social graph as routing) | ARCHITECTURE.md | Planned |
| Reactions (pin/thumbs up/thumbs down) | TODO.md | Planned |
| RefuseRedirect handling (retry suggested peer) | protocol.rs | Partial (send-only) |
| Profile anchor list used for discovery | ARCHITECTURE.md | Partial (field exists) |
| File-chain propagation (passive post discovery) | Design | Partial (manifest exists) |
| Anchor-to-anchor gossip/registry | Observed gap | Planned |
Appendix F: File Map
crates/core/
src/
lib.rs — module registration, parse_connect_string, parse_node_id_hex
types.rs — Post, PostId, NodeId, PublicProfile, PostVisibility, WrappedKey,
VisibilityIntent, Circle, PeerRecord, Attachment
content.rs — compute_post_id (BLAKE3), verify_post_id
crypto.rs — X25519 key conversion, DH, encrypt_post, decrypt_post, BLAKE3 KDF
blob.rs — BlobStore, compute_blob_id, verify_blob
storage.rs — SQLite: posts, peers, follows, profiles, circles, circle_members,
mesh_peers, reachable_n2/n3, social_routes, blobs, group_keys,
preferred_peers, known_anchors; auto-migration
protocol.rs — MessageType enum (37 types), ALPN (distsoc/3),
length-prefixed JSON framing, read/write helpers
connection.rs — ConnectionManager: mesh QUIC connections (MeshConnection),
session connections, slot management, initial exchange,
N1/N2 diff broadcast, pull sync, relay introduction
network.rs — iroh Endpoint, accept loop, connect_to_peer,
connect_by_node_id (7-step cascade), mDNS discovery
node.rs — Node struct (ties identity + storage + network), post CRUD,
follow/unfollow, profile CRUD, circle CRUD, encrypted post creation,
startup cycles, bootstrap, anchor register cycle
crates/cli/
src/main.rs — interactive REPL: post, feed, circles, connect, sync, etc.
crates/tauri-app/
src/lib.rs — Tauri v2 commands (38 IPC handlers), DTOs
frontend/
index.html — single-page UI: 5 tabs (Feed / My Posts / People / Messages / Settings)
app.js — Tauri invoke calls, rendering, identicon generator, circle CRUD
style.css — dark theme, post cards, visibility badges, transitions
License
ItsGoin is released under the Apache License, Version 2.0. You may use, modify, and distribute this software freely under the terms of that license.
This is a gift. Use it well.
13. Social Routing
Status: Complete
Caches addresses for follows and audience members, separate from mesh connections.
social_routes table: node_id, nplus10, addresses, peer_addresses, relation, status, last_connected_ms, reach_method, preferred_tree
Wire messages
0x70 / 0x71 / 0x72
Reconnect watchers
reconnect_watchers table: when peer A asks about disconnected peer B, A is registered as a watcher. When B reconnects, A gets a SocialAddressUpdate notification. Watchers pruned after 24 hours.
Social route lifecycle