Core Concepts
This page explains the foundational ideas behind Synchro: how changes are captured, how data is partitioned, how clients stay in sync, and how conflicts are resolved.
WAL-Based Change Detection
Synchro captures data changes through PostgreSQL logical replication, not triggers. When your application writes to a synced table, PostgreSQL records the change in the Write-Ahead Log (WAL). A long-running Synchro consumer decodes these WAL events and writes structured entries into the `sync_changelog` table.
```mermaid
graph LR
    A[App Write] --> B[PostgreSQL WAL]
    B --> C[Synchro Consumer]
    C --> D[sync_changelog]
```
Why not triggers?
| | WAL Replication | Triggers |
|---|---|---|
| Transaction overhead | Zero — WAL is written regardless | Trigger function executes inside every write transaction |
| Maintenance | One consumer process | Trigger DDL on every synced table; must be recreated on schema changes |
| Decoupling | Consumer can lag, restart, or crash without affecting the application | Trigger failure can abort the application transaction |
| Visibility | Captures all changes including those from migrations, backfills, and direct SQL | Only captures changes routed through the trigger |
The consumer connects using PostgreSQL’s replication protocol (the `replication=database` connection parameter) and subscribes to a named publication. It processes events in LSN order, persists its position for crash recovery, and sends standby heartbeats to prevent slot invalidation.
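The ordering and crash-recovery behaviour can be sketched in a few lines. This is a purely in-memory illustration, not Synchro's actual consumer; `walEvent`, `consumer`, and `process` are hypothetical names:

```go
package main

import "fmt"

// Hypothetical WAL event: an LSN plus a decoded change.
type walEvent struct {
	lsn    uint64
	change string
}

// consumer applies events in LSN order and remembers the last position
// it has durably processed, so a restart can resume from there.
type consumer struct {
	confirmedLSN uint64
	changelog    []string
}

func (c *consumer) process(ev walEvent) {
	if ev.lsn <= c.confirmedLSN {
		return // already processed before a restart; skip the duplicate
	}
	c.changelog = append(c.changelog, ev.change) // write the sync_changelog entry
	c.confirmedLSN = ev.lsn                      // persist the position (in-memory here)
}

func main() {
	c := &consumer{}
	// The duplicate event at LSN 2 models a redelivery after a crash.
	for _, ev := range []walEvent{{1, "insert tasks/1"}, {2, "update tasks/1"}, {2, "update tasks/1"}} {
		c.process(ev)
	}
	fmt.Println(len(c.changelog), c.confirmedLSN) // 2 2
}
```

A real consumer persists `confirmedLSN` durably and reports it back to PostgreSQL so the replication slot can discard acknowledged WAL.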
Buckets and Subscriptions
Every changelog entry is tagged with a bucket ID. Buckets are the unit of data partitioning — they determine which changes a client receives during pull.
How Buckets Are Assigned
When the WAL consumer processes a change, it calls the configured `BucketAssigner` to determine which bucket(s) the record belongs to.
The default `JoinResolver` uses the registry metadata:
- Tables with an `OwnerColumn` — Bucket ID is `user:<owner_column_value>`. A task owned by user `abc-123` goes into bucket `user:abc-123`.
- Child tables (via `ParentTable`) — The resolver walks the parent chain up to the root table and uses the root’s owner column.
- Tables without ownership — Records go into the `global` bucket, visible to all clients.
- Custom resolvers — Implement the `BucketAssigner` interface for multi-tenant, team-based, or content-sharing bucket strategies.
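The resolver rules above can be sketched as a small function. The types and names here are illustrative, not Synchro's API, and the real resolver joins against the parent table to find the root row's owner; this sketch assumes the row already carries that value:

```go
package main

import "fmt"

// Hypothetical registry metadata for one table.
type tableMeta struct {
	ownerColumn string // empty if the table has no ownership
	parentTable string // empty if this is a root table
}

// assignBucket mirrors the JoinResolver rules: walk the parent chain
// to the root table, use the root's owner column, else fall back to "global".
func assignBucket(registry map[string]tableMeta, table string, row map[string]string) string {
	meta := registry[table]
	for meta.parentTable != "" { // child tables inherit the root's bucket
		table = meta.parentTable
		meta = registry[table]
	}
	if meta.ownerColumn == "" {
		return "global"
	}
	return "user:" + row[meta.ownerColumn]
}

func main() {
	registry := map[string]tableMeta{
		"tasks":    {ownerColumn: "owner_id"},
		"subtasks": {parentTable: "tasks"},
		"tags":     {}, // no ownership
	}
	fmt.Println(assignBucket(registry, "tasks", map[string]string{"owner_id": "abc-123"})) // user:abc-123
	fmt.Println(assignBucket(registry, "tags", nil))                                       // global
}
```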
Client Subscriptions
When a client registers, the server computes its bucket subscriptions (typically `["user:<user_id>", "global"]`). During pull, only changelog entries matching the client’s subscriptions are returned.
Checkpoints
The `sync_changelog` table uses a monotonically increasing `BIGSERIAL` column (`seq`) as its cursor. Each client tracks its checkpoint — the highest `seq` value it has processed.
How Checkpoints Work
- Client sends a pull request with its current checkpoint (e.g., `checkpoint: 500`).
- Server queries `sync_changelog` for entries where `seq > 500` matching the client’s bucket subscriptions.
- Server returns the changes along with the new checkpoint (e.g., `checkpoint: 742`).
- Client applies the changes locally and stores `742` as its new checkpoint.
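The steps above can be sketched with an in-memory changelog. The `entry` type and `pull` function are illustrative names, not Synchro's API; the real server runs this as a SQL query:

```go
package main

import "fmt"

type entry struct {
	seq    int64
	bucket string
	change string
}

// pull returns entries after the checkpoint that match the client's
// bucket subscriptions, plus the new checkpoint (highest seq returned).
func pull(log []entry, checkpoint int64, subs map[string]bool) ([]entry, int64) {
	var out []entry
	newCkpt := checkpoint
	for _, e := range log {
		if e.seq > checkpoint && subs[e.bucket] {
			out = append(out, e)
			if e.seq > newCkpt {
				newCkpt = e.seq
			}
		}
	}
	return out, newCkpt
}

func main() {
	log := []entry{
		{501, "user:abc-123", "update tasks/1"},
		{600, "user:other", "update tasks/9"}, // not subscribed: filtered out
		{742, "global", "insert settings/1"},
	}
	subs := map[string]bool{"user:abc-123": true, "global": true}
	changes, ckpt := pull(log, 500, subs)
	fmt.Println(len(changes), ckpt) // 2 742
}
```

Re-running `pull` with the same checkpoint returns the same changes, which is what makes checkpoint advancement idempotent.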
Properties
- Idempotent advancement — Checkpoint only moves forward. Re-sending the same checkpoint returns the same changes. The server enforces `last_pull_seq < new_seq` on update.
- Per-client isolation — Each client has its own checkpoint. Slow clients do not block fast ones.
- Compaction boundary — If changelog compaction is enabled, entries below a retention threshold are deleted. If a client’s checkpoint falls behind the compaction boundary, the server responds with `snapshot_required: true` and the client must re-bootstrap via the snapshot endpoint.
Conflict Resolution
Conflicts occur when a client pushes a change to a record that was modified on the server since the client last pulled it.
Strategies
Synchro supports three conflict resolution strategies:
LWW (Last-Write-Wins)
The default strategy. Compares the client’s `client_updated_at` timestamp against the server’s `updated_at`, adjusting for a configurable clock skew tolerance.
```go
engine, _ := synchro.NewEngine(synchro.Config{
	DB:                 db,
	Registry:           registry,
	ClockSkewTolerance: 5 * time.Second, // favour the client within 5s
})
```
If the client provides a `base_updated_at` (optimistic concurrency), the resolver first checks whether the server record changed since that base version:
- Server unchanged since base — Client wins (no true conflict).
- Server changed since base — Falls back to timestamp comparison.
ServerWins
The server version always wins. Client changes are rejected with a conflict status and the current server version is returned so the client can reconcile.
```go
engine, _ := synchro.NewEngine(synchro.Config{
	DB:               db,
	Registry:         registry,
	ConflictResolver: &synchro.ServerWinsResolver{},
})
```
Custom
Implement the `ConflictResolver` interface for domain-specific logic.
```go
type ConflictResolver interface {
	Resolve(ctx context.Context, conflict Conflict) (Resolution, error)
}
```
The `Conflict` struct provides full context: table name, record ID, client/server data as JSON, timestamps, and the client’s base version.
Push Flow
```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant DB as PostgreSQL
    Client->>Server: POST /sync/push (changes)
    Server->>DB: BEGIN + SET LOCAL app.user_id
    loop Each change
        Server->>DB: Read current server version
        alt No conflict
            Server->>DB: Apply change
            Server-->>Server: Status: applied
        else Conflict detected
            Server->>Server: ConflictResolver.Resolve()
            alt Client wins
                Server->>DB: Apply client change
                Server-->>Server: Status: applied
            else Server wins
                Server-->>Server: Status: conflict + server version
            end
        end
    end
    Server->>DB: COMMIT
    Server-->>Client: accepted[] + rejected[]
```
Each push is processed in a single database transaction under RLS context. The client receives per-record results: `applied`, `conflict` (with the current server version), `rejected_terminal`, or `rejected_retryable`.
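On the client side, those per-record statuses drive queue bookkeeping. This sketch assumes (it is not stated above) that `rejected_terminal` records are dropped from the queue like applied ones, while retryables stay queued:

```go
package main

import "fmt"

// Per-record push outcome, as described above.
type pushResult struct {
	RecordID string
	Status   string         // "applied", "conflict", "rejected_terminal", "rejected_retryable"
	Server   map[string]any // current server version, present on "conflict"
}

// handleResults sorts results into three client-side actions: drain the
// queue entry, reconcile against the server version, or keep it queued.
func handleResults(results []pushResult) (drained, reconcile, retry []string) {
	for _, r := range results {
		switch r.Status {
		case "applied", "rejected_terminal":
			drained = append(drained, r.RecordID) // terminal either way (assumption)
		case "conflict":
			reconcile = append(reconcile, r.RecordID)
		case "rejected_retryable":
			retry = append(retry, r.RecordID)
		}
	}
	return
}

func main() {
	d, c, r := handleResults([]pushResult{
		{RecordID: "a", Status: "applied"},
		{RecordID: "b", Status: "conflict"},
		{RecordID: "c", Status: "rejected_retryable"},
	})
	fmt.Println(len(d), len(c), len(r)) // 1 1 1
}
```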
Schema Governance
Synchro enforces a server-authoritative schema. The server computes a canonical schema from `pg_catalog` for all registered tables and produces a versioned hash. Clients must present a matching version and hash on every request.
How It Works
- Server computes schema — On first request, the server reads column definitions from `pg_catalog`, computes a SHA-256 hash, and persists it in `sync_schema_manifest` with an auto-incrementing version.
- Client receives schema on registration — The register response includes `schema_version` and `schema_hash`.
- Handshake on every request — Push, pull, and snapshot requests include the client’s `schema_version` and `schema_hash`. The server compares them against the current manifest.
- Mismatch handling — If the client’s schema does not match, the server returns HTTP `409 Conflict` with the current server version and hash. The client re-fetches the schema via `GET /sync/schema` and migrates its local SQLite tables.
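A canonical hash needs a deterministic serialisation, so equal schemas always hash equally. The serialisation format below is illustrative; Synchro's actual manifest format derived from `pg_catalog` may differ:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// schemaHash sketches a canonical schema hash: sort tables and columns so
// that the same logical schema always yields the same SHA-256 digest.
func schemaHash(schema map[string][]string) string {
	tables := make([]string, 0, len(schema))
	for t := range schema {
		tables = append(tables, t)
	}
	sort.Strings(tables)
	var b strings.Builder
	for _, t := range tables {
		cols := append([]string(nil), schema[t]...) // copy before sorting
		sort.Strings(cols)
		b.WriteString(t + "(" + strings.Join(cols, ",") + ");")
	}
	sum := sha256.Sum256([]byte(b.String()))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := schemaHash(map[string][]string{"tasks": {"id uuid", "title text"}})
	b := schemaHash(map[string][]string{"tasks": {"title text", "id uuid"}})
	fmt.Println(a == b) // true: column order does not change the hash
}
```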
```mermaid
graph TD
    A[Client sends request] --> B{schema_version + schema_hash match?}
    B -->|Yes| C[Process normally]
    B -->|No| D[409 Conflict]
    D --> E[Client calls GET /sync/schema]
    E --> F[Client migrates local tables]
    F --> G[Client retries request]
```
Sync Lifecycle
The full lifecycle from first connection to steady-state sync:
```mermaid
sequenceDiagram
    participant Client
    participant Server
    Client->>Server: POST /sync/register
    Server-->>Client: client_id, schema_version, schema_hash
    Client->>Server: GET /sync/schema
    Server-->>Client: table definitions, columns, types
    Note over Client: Create/migrate local SQLite tables
    Client->>Server: POST /sync/snapshot (page 1)
    Server-->>Client: records, cursor, has_more=true
    Client->>Server: POST /sync/snapshot (page N)
    Server-->>Client: records, has_more=false, checkpoint
    Note over Client: Bootstrap complete
    loop Sync Loop
        Client->>Server: POST /sync/push (pending changes)
        Server-->>Client: accepted/rejected results
        Client->>Server: POST /sync/pull (checkpoint)
        Server-->>Client: changes, deletes, new checkpoint
    end
```
Phase 1: Registration
The client registers with the server, providing its `client_id`, `platform`, and `app_version`. The server creates or updates the client record and returns the current schema version and hash.
Phase 2: Schema Sync
The client fetches the full schema definition (`GET /sync/schema`), which includes every synced table’s columns, types, primary key, push policy, and parent relationships. The client uses this to create or migrate local SQLite tables.
Phase 3: Snapshot Bootstrap
The client pages through a full snapshot of its subscribed data. The snapshot endpoint returns records in table-dependency order with a stateless cursor for pagination. When the final page arrives (`has_more: false`), the client stores the checkpoint and transitions to incremental sync.
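Stateless cursor pagination can be sketched with an integer cursor over an ordered record list. A real cursor presumably encodes the table and last primary key rather than a plain index; this is only the shape of the mechanism:

```go
package main

import "fmt"

// snapshotPage returns one page plus the cursor for the next request. The
// server keeps no per-client state: everything it needs is in the cursor.
func snapshotPage(records []string, cursor, pageSize int) (page []string, next int, hasMore bool) {
	end := cursor + pageSize
	if end >= len(records) {
		return records[cursor:], len(records), false
	}
	return records[cursor:end], end, true
}

func main() {
	// Records arrive in table-dependency order: parents before children.
	recs := []string{"users/1", "tasks/1", "tasks/2", "subtasks/1", "subtasks/2"}
	cursor, hasMore := 0, true
	for hasMore {
		var page []string
		page, cursor, hasMore = snapshotPage(recs, cursor, 2)
		fmt.Println(page, hasMore)
	}
}
```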
Phase 4: Incremental Sync
The client enters a push-then-pull loop:
- Push — Send any locally queued changes. The server returns per-record results.
- Pull — Send the current checkpoint. The server returns all changes since that checkpoint, along with deletes and a new checkpoint value.
This loop runs on a configurable interval (typically 5-30 seconds) and on-demand when the user makes local changes.
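One iteration of that loop can be sketched as a function taking the push and pull calls as parameters; the server interaction is faked, and all names here are illustrative:

```go
package main

import "fmt"

// syncOnce performs one push-then-pull iteration: push pending changes,
// drop the acknowledged ones, then pull everything since the checkpoint.
func syncOnce(pending []string, checkpoint int64,
	push func([]string) []string, // returns record IDs the server acknowledged
	pull func(int64) ([]string, int64), // returns changes + new checkpoint
) (remaining []string, newCheckpoint int64, changes []string) {
	acked := map[string]bool{}
	for _, id := range push(pending) {
		acked[id] = true
	}
	for _, id := range pending {
		if !acked[id] {
			remaining = append(remaining, id) // unacknowledged: stays queued
		}
	}
	changes, newCheckpoint = pull(checkpoint)
	return
}

func main() {
	remaining, ckpt, changes := syncOnce(
		[]string{"a", "b"}, 500,
		func(p []string) []string { return p }, // server accepts everything
		func(c int64) ([]string, int64) { return []string{"update tasks/1"}, 742 },
	)
	fmt.Println(len(remaining), ckpt, len(changes)) // 0 742 1
}
```

Pushing before pulling means the client's own writes are resolved first, so the subsequent pull can already reflect their outcome.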
Client-Side CDC
Client SDKs use SQLite triggers to automatically track local changes. There is no special write API — applications use normal SQL `INSERT`, `UPDATE`, and `DELETE` statements against their local tables.
How It Works
For each synced table, the SDK creates three SQLite triggers:
| Trigger | Purpose |
|---|---|
| `AFTER INSERT` | Records the new row ID and table name in the pending changes queue |
| `AFTER UPDATE` | Records the updated row ID in the pending changes queue |
| `BEFORE DELETE` | Converts the hard delete into a soft delete (`SET deleted_at = datetime('now')`) and records it in the pending queue |
Sync Lock
During pull application, the SDK sets a sync lock flag that disables CDC triggers. This prevents incoming server changes from being re-queued as pending pushes, which would create an infinite echo loop.
Pending Changes Queue
The pending queue is a local SQLite table that tracks which records have been modified since the last push. During push, the SDK:
- Reads all pending entries.
- Hydrates each entry by reading the current row from the local table.
- Sends the hydrated changes to the server.
- On success, drains the acknowledged entries from the queue.
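The four steps above can be sketched as follows. The types and the `table/rowID` key scheme are illustrative; the real SDK works against SQLite tables rather than maps:

```go
package main

import "fmt"

type pendingEntry struct {
	Table string
	RowID string
}

// pushPending drains the queue: hydrate each entry from the local row's
// current state, send the batch, then drop acknowledged entries.
func pushPending(queue []pendingEntry, local map[string]map[string]any,
	send func([]map[string]any) map[string]bool) []pendingEntry {
	var payload []map[string]any
	for _, e := range queue {
		payload = append(payload, local[e.Table+"/"+e.RowID]) // hydrate current state
	}
	acked := send(payload)
	var remaining []pendingEntry
	for _, e := range queue {
		if !acked[e.Table+"/"+e.RowID] {
			remaining = append(remaining, e) // keep unacknowledged entries queued
		}
	}
	return remaining
}

func main() {
	local := map[string]map[string]any{"tasks/1": {"id": "1", "title": "buy milk"}}
	queue := []pendingEntry{{"tasks", "1"}}
	remaining := pushPending(queue, local, func(p []map[string]any) map[string]bool {
		return map[string]bool{"tasks/1": true} // server accepted everything
	})
	fmt.Println(len(remaining)) // 0
}
```

Hydrating at push time (rather than storing row data in the queue) means multiple edits to the same row collapse into a single change carrying the latest state.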