Section 02 — Technology

ConvergenceX
Proof-of-Work

A memory-hard, CPU-friendly, ASIC-resistant proof-of-work algorithm built on verifiable dynamical-system certificates. Three-layer architecture: base PoW, ASERT difficulty targeting, and cASERT progressive hardening overlay.

CPU-MINED | 4GB dataset cache · 256-op program · 100K rounds · No GPU advantage
// 00 — ARCHITECTURE

Three-Layer PoW Stack

Layer 1
ConvergenceX v2.0
Memory-hard hash
4GB dataset cache
256-op program + 100K rounds
Layer 2
ASERT
Difficulty targeting
24h half-life
Q16.16 encoding
Layer 3
cASERT
Progressive hardening
L1–L5+ unbounded
k=4, steps=4
Output
Block ID
commit = CX(header)
block_id = SHA256(hdr‖commit)
target from bits_q

The three layers are independent but compose: ConvergenceX produces the hash, ASERT sets the target difficulty, and cASERT modulates ConvergenceX parameters when the chain runs ahead of schedule. Each layer can be analyzed and verified separately.
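The composition can be sketched in a few lines of Python. This is a minimal model under stated assumptions: SHA-256 stands in for the memory-hard ConvergenceX hash, and the difficulty-to-target mapping is illustrative, not the consensus formula.

```python
import hashlib

def convergencex(header: bytes) -> bytes:
    """Layer 1 stand-in — the real CX commit requires the 4GB dataset."""
    return hashlib.sha256(b"CX" + header).digest()

def target_from_bits_q(bits_q: int) -> int:
    """Layer 2 output: Q16.16 difficulty -> 256-bit comparison target.
    difficulty = bits_q / 2^16, so target = (2^256 << 16) // bits_q (assumed mapping)."""
    return ((1 << 256) << 16) // bits_q

def block_id(header: bytes) -> bytes:
    """Output stage: block_id = SHA256(hdr ‖ commit)."""
    commit = convergencex(header)          # Layer 1 produces the hash
    return hashlib.sha256(header + commit).digest()

# Validity: block_id interpreted as a 256-bit integer against the Layer 2 target.
hdr = b"\x00" * 112
valid = int.from_bytes(block_id(hdr), "big") <= target_from_bits_q(353_075)
```

Layer 3 (cASERT) is not shown here: it only modulates ConvergenceX parameters and leaves this pipeline shape unchanged.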

// 01 — CONVERGENCEX

ConvergenceX v2.0 Algorithm

Memory-Hard Proof-of-Work
CONSENSUS-CRITICAL

ConvergenceX v2.0 is a memory-hard, CPU-friendly proof-of-work algorithm with two key improvements over v1.0: a persistent dataset cache (4GB generated once per block from prev_block_hash, reused across all nonce attempts) and per-block program generation (256-operation program derived from block hash, executed within each iteration). CPU miners benefit from dataset cache reuse; ASICs cannot reduce the memory requirement or pre-optimize for the random program that changes every block.

// ConvergenceX v2.0 — dataset cache + per-block program
dataset = generate_or_reuse(4096 MB, prev_block_hash)   // persistent 4GB cache
program = derive_program(block_hash, 256 ops)           // unique per block

for round = 0 .. 99,999:
    page_idx = f(state, round) mod num_pages
    state = mix(state, scratchpad[page_idx])
    prog_out = program.execute(dataset[round mod size], round)
    state = state XOR prog_out                          // program output mixed in
    scratchpad[page_idx] = transform(state)

commit = finalize(state)                                // 32-byte PoW output
block_id = SHA256(full_header ‖ commit)
CX_SCRATCH_M     4096 MB // 4GB dataset per block (persistent cache, v2.0)
CX_ROUNDS_M      100,000 sequential iterations
CX_PROGRAM       256 ops // per-block program: MUL/XOR/ADD/ROT/AND/OR/NOT/SUB (v2.0)
CX_N             32 // internal state width
CX_LR_SHIFT      18 // base learning-rate shift
CX_LAM           100 // convergence lambda parameter
CX_CP_M          6250 checkpoints // 100,000 / 16
Dataset          Generated once per block, reused across all nonce attempts (v2.0)
ASIC resistance  Random program changes every block — no fixed circuit optimization
GPU resistance   4GB VRAM per thread — no pipelining advantage
// 01.1 — WORKED EXAMPLE

Worked Example — Block #1

Mining Pipeline Step-by-Step
EDUCATIONAL

Concrete walkthrough of mining Block #1 with genesis parameters: prev_hash = 0000...0000, bitsQ = 353,075 → difficulty = 5.3875

// Step 1 — Generate 4GB persistent dataset from prev_block_hash
seed = epoch_scratch_key(epoch=0)
memory[0] = hash(seed)
memory[i] = hash(memory[i-1])   // 512M entries, sequential
// Cost: ~30s on modern CPU — ASICs cannot skip this step

// Step 2 — Derive 256-op program from block_hash
program[0] = ops[block_hash[0] % 8]   // e.g. MUL imm=0xA3F2...
program[1] = ops[block_hash[1] % 8]   // e.g. XOR imm=0x17C8...
// ...256 ops derived deterministically

// Step 3 — Load state and execute
state = dataset[nonce % 512M]
state = program.execute(state, step=0)   // MUL operation
scratchpad[page] = transform(state)

// Step 4 — Checkpoint commitment
checkpoint[0] = SHA256(state || iteration)
// → committed to Merkle tree (checkpoint every 16 rounds, 6,250 total)

// Step 5 — Stability check after 100,000 iterations
for k = 0..3:
    x_perturbed = x + random_perturbation(ε)
    for s = 0..3:
        x_perturbed = gradient_step(x_perturbed)
    assert |x_perturbed - x_original| < 180   // STABLE ✓

// Step 6 — Finalize and compare to target
commit = finalize(state || checkpoints)
if commit < target(bitsQ=353075):
    // BLOCK FOUND ✓
else:
    nonce++   // repeat from Step 3

Steps 1 & 2 execute once per block. Only Steps 3–6 repeat per nonce attempt. This is the core v2.0 efficiency gain.
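The once-per-block vs per-nonce split can be made concrete with a toy Python sketch. Sizes are shrunk drastically, SHA-256 stands in for every memory-hard primitive, and all names are illustrative rather than the actual SOST implementation.

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Steps 1–2 — executed once per block
def build_dataset(prev_hash: bytes, entries: int = 1024) -> list:
    """Toy stand-in for the 4GB dataset (512M sequential entries in the real design)."""
    mem = [sha(prev_hash)]
    for _ in range(entries - 1):
        mem.append(sha(mem[-1]))        # each entry depends on the previous one
    return mem

def derive_program(block_hash: bytes, n_ops: int = 256) -> list:
    """Toy per-block program: one of 8 opcode IDs per hash byte."""
    return [block_hash[i % 32] % 8 for i in range(n_ops)]

# Steps 3–6 — repeated per nonce attempt
def try_nonce(dataset, program, nonce: int, rounds: int = 100) -> bytes:
    state = dataset[nonce % len(dataset)]
    for r in range(rounds):
        op = program[r % len(program)]
        # mix state, a dataset entry, and the program opcode each round
        state = sha(state + dataset[r % len(dataset)] + bytes([op]))
    return state                        # toy "commit"

prev_hash = b"\x00" * 32
dataset = build_dataset(prev_hash)      # built once, reused for every nonce
program = derive_program(sha(b"block header"))
commit = try_nonce(dataset, program, nonce=0)
```

Only `try_nonce` runs inside the mining loop; `build_dataset` and `derive_program` amortize over all nonce attempts, which is the v2.0 efficiency gain described above.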

// 01.2 — ASIC RESISTANCE

ASIC Resistance Analysis

ConvergenceX v2.0 vs Bitcoin SHA-256
Attack Vector      Bitcoin SHA-256   ConvergenceX v2.0
Custom silicon     Trivial           Requires 4GB on-chip memory
Parallel hashing   Easy              Sequential dataset dependency
Fixed circuit      Optimal           Program changes every block
Memory reduction   N/A               Checkpoints catch shortcuts
GPU advantage      Moderate          Penalized by memory latency
CPU advantage      None              L3 cache reuse, sequential access

An ASIC for ConvergenceX v2.0 would require: 4GB LPDDR5 on-package, a general-purpose ALU for 8 operation types, and full re-fabrication for every block. This is economically equivalent to building a CPU — the entire point of the design.

// 01.3 — SOSTCOMPACT

SOSTCompact Q16.16 Decoding

Difficulty Encoding Example
ENCODING
// bitsQ = 353,075
integer  = 353075 >> 16    = 5
fraction = 353075 & 0xFFFF = 25,395
real     = 5 + 25395/65536 = 5.3875

MIN     bitsQ = 65,536     → difficulty 1.0000  // network near-dead
GENESIS bitsQ = 353,075    → difficulty 5.3875  // genesis block
MAX     bitsQ = 16,711,680 → difficulty 255.000 // theoretical max
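The decoding rule can be checked directly against all three reference points. This is a sketch of the Q16.16 arithmetic only, not the consensus implementation:

```python
def decode_bits_q(bits_q: int) -> float:
    """SOSTCompact Q16.16: upper 16 bits are the integer part,
    lower 16 bits the fraction in units of 1/65536."""
    return (bits_q >> 16) + (bits_q & 0xFFFF) / 65536.0

# The three reference points from the table above:
assert decode_bits_q(65_536) == 1.0                  # MIN
assert abs(decode_bits_q(353_075) - 5.3875) < 1e-4   # GENESIS (5.38749...)
assert decode_bits_q(16_711_680) == 255.0            # MAX
```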

ASERT clamp per block:
Max DOWN: −2 units → ≈ −3.0% per block
Max UP: +3 units → ≈ +4.6% per block
After 24h of 2× faster blocks → difficulty doubles

// 02 — ASERT

ASERT Difficulty

Exponential Moving Average Targeting
CONSENSUS-CRITICAL

ASERT (Absolutely Scheduled Exponentially Rising Targets) adjusts difficulty every block based on the time elapsed since an anchor block. Unlike legacy DAA systems, ASERT has no discrete retarget windows — it responds continuously to hashrate changes with a 24-hour half-life.

Algorithm          ASERT (aserti3-2d variant)
ASERT_HALF_LIFE    86,400 seconds // 24 hours
TARGET_BLOCK_TIME  600 seconds // 10 minutes
Adjustment         Every block — no retarget windows
Response           Halves difficulty if hashrate halves (over 24h)
Anchor             Genesis block (height 0, time, bits_q)
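The exponential response can be sketched as a simplified float model of the ASERT shape, assuming the standard aserti exponential form; the consensus code works on integer targets with the per-block Q16.16 clamp, so this only shows the curve, not the exact arithmetic.

```python
HALF_LIFE = 86_400    # seconds — 24h
BLOCK_TIME = 600      # seconds — 10 min

def asert_difficulty(anchor_difficulty: float, time_elapsed: int,
                     blocks_elapsed: int) -> float:
    """Simplified ASERT response: difficulty moves exponentially with
    schedule drift, doubling (or halving) over one 24h half-life."""
    drift = blocks_elapsed * BLOCK_TIME - time_elapsed   # >0 → chain ahead of schedule
    return anchor_difficulty * 2.0 ** (drift / HALF_LIFE)

# 24h of 2× faster blocks: 288 found vs 144 expected → drift = 86,400s → doubles
d = asert_difficulty(5.3875, time_elapsed=86_400, blocks_elapsed=288)
```

The example reproduces the "after 24h of 2× faster blocks, difficulty doubles" behavior stated earlier.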
SOSTCompact Q16.16 Encoding
ENCODING

SOST uses a custom compact difficulty encoding called SOSTCompact. Instead of Bitcoin's nBits (mantissa/exponent), SOST uses Q16.16 fixed-point: the upper 16 bits are the integer part and the lower 16 bits are the fractional part. This gives continuous sub-integer difficulty precision without floating-point ambiguity.

// SOSTCompact Q16.16 — difficulty encoding
bits_q = uint32_t                 // stored in block header
integer_part = bits_q >> 16       // upper 16 bits
fractional   = bits_q & 0xFFFF    // lower 16 bits
real_difficulty = integer_part + fractional / 65536.0

// Example: bits_q = 0x00028000
// integer = 2, fractional = 32768/65536 = 0.5
// real difficulty = 2.5

Format     Q16.16 fixed-point unsigned
Range      0.0 to 65535.99998
Precision  1/65536 ≈ 0.0000153
Storage    uint32_t in block header (4 bytes)
Advantage  Deterministic, no floating-point rounding across platforms
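A round-trip sketch of the encoding follows; truncation toward zero is an assumption, since the section specifies only the bit layout:

```python
def encode_bits_q(difficulty: float) -> int:
    """Real difficulty -> Q16.16 bits_q (truncating rounding — assumed)."""
    return int(difficulty * 65536) & 0xFFFFFFFF

def decode_bits_q(bits_q: int) -> float:
    """Inverse of the listing above: integer part + fraction/65536."""
    return (bits_q >> 16) + (bits_q & 0xFFFF) / 65536.0

assert encode_bits_q(2.5) == 0x00028000   # the example value from the listing
assert decode_bits_q(0x00028000) == 2.5
```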
// 03 — cASERT

cASERT Overlay

Progressive Hardening System
CONSENSUS-CRITICAL

cASERT (convergent ASERT) is a deterministic overlay that modulates ConvergenceX parameters when the chain runs ahead of expected schedule. It measures how many blocks the chain is “ahead” (actual_height − expected_height) and activates progressive hardening levels. Hardening is unidirectional — it only increases computational cost, never decreases it.

// blocks_ahead = chain_height - expected_height_from_elapsed_time
if blocks_ahead < 5:     level = L1   // WARMUP — no scaling
elif blocks_ahead < 26:  level = L2   // scale = 2
elif blocks_ahead < 51:  level = L3   // scale = 3
elif blocks_ahead < 76:  level = L4   // scale = 4
elif blocks_ahead < 101: level = L5   // scale = 6
else:                    level = L6+  // scale = unbounded

// Parameters scaled by level
stab_scale  = level + 1   // unbounded above L4
stab_k      = 4           // fixed
stab_steps  = 4           // fixed
stab_margin = 180         // fixed
CASERT_L2_BLOCKS  5 blocks ahead // scale = 2
CASERT_L3_BLOCKS  26 blocks ahead // scale = 3
CASERT_L4_BLOCKS  51 blocks ahead // scale = 4
CASERT_L5_BLOCKS  76 blocks ahead // scale = 6
CASERT_L6+        101+ blocks ahead // scale = unbounded (grows every 50 blocks)
CX_STB_K          4 // stability constant
CX_STB_STEPS      4 // convergence steps per iteration
CX_STB_MARGIN     180 // stability margin parameter
CX_STB_LR         20 // LR_SHIFT + 2
Direction         Unidirectional — hardening only, never easing
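The threshold table maps directly to a level function. The sketch below follows the thresholds above; the exact L6+ growth formula is inferred from "grows every 50 blocks" and the L14 at 525-ahead figure used elsewhere in this section, so treat it as an assumption:

```python
def casert_level(blocks_ahead: int):
    """Return (level, scale) for a given schedule lead.
    L6+ growth (one level per further 50 blocks) is inferred, not quoted."""
    if blocks_ahead < 5:    return 1, 1        # L1 — WARMUP, no scaling
    if blocks_ahead < 26:   return 2, 2
    if blocks_ahead < 51:   return 3, 3
    if blocks_ahead < 76:   return 4, 4
    if blocks_ahead < 101:  return 5, 6        # L5 — emergency
    level = 6 + (blocks_ahead - 101) // 50     # +1 level every further 50 blocks
    return level, None                         # scale unbounded above L5
```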
DECAY ANTI-STALL (v5.1)

If no block is found for 2 hours (7200s), cASERT level decays downward to prevent chain stall:

L8+    drops 1 level every 10 min // fast recovery from extreme levels
L4–L7  drops 1 level every 20 min // medium recovery
L2–L3  drops 1 level every 30 min // cautious near neutral
L1     floor — no further decay
Scope  Mining only — block validation always uses raw schedule level

When mining resumes, decay stops instantly — raw schedule level takes over. Example: L14 (525 ahead) → 2h wait + L14→L8 (60m) + L8→L4 (80m) + L4→L1 (90m) = 5h50m to neutral.
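The decay timetable can be reproduced numerically. The sketch assumes each step's interval is set by the level being entered — the reading consistent with the L14 example — but the spec's exact boundary handling is not stated here:

```python
def minutes_to_neutral(level: int) -> int:
    """Total stall-decay time in minutes from `level` down to the L1 floor,
    counting the initial 2h (120 min) no-block trigger.
    Interval-per-step is keyed on the destination level (assumption)."""
    total = 120                   # 2h of no blocks before decay starts
    while level > 1:
        level -= 1                # drop one level
        if level >= 8:
            total += 10           # fast recovery band
        elif level >= 4:
            total += 20           # medium recovery band
        else:
            total += 30           # cautious band near neutral
    return total
```

Under this model, `minutes_to_neutral(14)` gives 350 minutes, i.e. the 5h50m figure in the example above.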

L1 — Warmup
<5 blocks ahead
scale=1, baseline CX parameters
Normal mining conditions
L2 — Elevated
5–25 blocks ahead
scale=2, moderate hardening
Increased computational cost
L3 — Active
26–50 blocks ahead
scale=3, significant hardening
Fast-chain response engaged
L4 — High
51–75 blocks ahead
scale=4, heavy hardening
Sustained fast-chain pressure
L5 — Emergency
76–100 blocks ahead
scale=6, emergency hardening
Severe fast-chain response
L6+ — Unbounded
101+ blocks ahead
scale=unbounded (grows every 50 blocks)
Maximum progressive response
// 04 — BLOCK STRUCTURE

Block Header

Header Fields & PoW Construction
CONSENSUS

The block header contains a 72-byte hash-commitment (hc72) plus additional fields for checkpoints, nonce, and extra_nonce. The PoW is computed over the full header and the resulting commit hash determines block validity against the target.

prev_hash         32 bytes — SHA256 of previous block
merkle_root       32 bytes — Merkle root of all transactions
timestamp         4 bytes — Unix seconds (uint32 in hc72)
bits_q            4 bytes — SOSTCompact Q16.16 difficulty
checkpoints_root  32 bytes — cASERT checkpoint Merkle root
nonce             4 bytes — miner-varied PoW nonce
extra_nonce       4 bytes — additional nonce space
commit            32 bytes — ConvergenceX output
block_id          SHA256(full_header ‖ commit)
Validity          block_id ≤ target(bits_q)
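A serialization sketch of the fields above can tie the sizes together; field order and little-endian packing are assumptions, since the section gives sizes, not the wire layout:

```python
import hashlib
import struct

def build_header(prev_hash: bytes, merkle_root: bytes, timestamp: int, bits_q: int,
                 checkpoints_root: bytes, nonce: int, extra_nonce: int) -> bytes:
    """Pack the listed fields: 72-byte hc72 core + checkpoints_root + nonces."""
    assert len(prev_hash) == 32 and len(merkle_root) == 32 and len(checkpoints_root) == 32
    hc72 = prev_hash + merkle_root + struct.pack("<II", timestamp, bits_q)  # 72 bytes
    return hc72 + checkpoints_root + struct.pack("<II", nonce, extra_nonce)

def block_id(full_header: bytes, commit: bytes) -> bytes:
    """block_id = SHA256(full_header ‖ commit), per the table above."""
    return hashlib.sha256(full_header + commit).digest()

hdr = build_header(b"\x00" * 32, b"\x11" * 32, 1_700_000_000, 353_075,
                   b"\x22" * 32, 0, 0)
```

Note how prev_hash + merkle_root + timestamp + bits_q account for exactly the 72-byte hc72 commitment (32 + 32 + 4 + 4).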
// 05 — DESIGN RATIONALE

Why ConvergenceX

Design Principles
CPU fairness              4GB dataset cache requires commodity hardware. Persistent per-block — CPU miners benefit from reuse.
Sequential hardness       100,000 rounds with data-dependent memory accesses + per-block 256-op program. No meaningful parallelism.
Verification efficiency   Checkpoints every 16 rounds (6,250 total) allow fast verification without repeating full computation.
Deterministic difficulty  Q16.16 fixed-point encoding eliminates cross-platform floating-point divergence.
Adaptive response         cASERT overlay provides real-time hardening without protocol-level forks or governance votes.
Simplicity                Three layers, each with well-defined responsibilities. No opaque multi-algorithm mixing.
// 06 — FINANCIAL PRIMITIVES

No Smart Contracts — Native Primitives

Design Philosophy
NO VM

SOST does not support smart contracts — no virtual machine, no user-deployed bytecode, no Turing-complete execution. Instead, purpose-built transaction types provide financial primitives directly in the consensus layer. Deterministic, auditable, and not exploitable through contract bugs — they are protocol rules, not programs.

Output Type         Code  Purpose                              Status
OUT_BOND_LOCK       0x10  Time-locked bond for PoPC Model A    Reserved (Phase 1)
OUT_ESCROW_LOCK     0x11  Time-locked escrow for PoPC Model B  Reserved (Phase 1)
OUT_BURN            0x20  Reserved — NOT activated             No activation planned
OUT_TOKEN_ISSUE     TBD   Native metal-backed token issuance   Future (Phase 2)
OUT_TOKEN_TRANSFER  TBD   Native metal-backed token transfer   Future (Phase 2)

This approach follows the model established by Ravencoin (native assets on UTXO without VM) and is consistent with Bitcoin's OP_CHECKLOCKTIMEVERIFY — consensus rules, not smart contracts. The supply of SOST is immutable by construction. No minting, burning, or destruction mechanism exists in the protocol.

// SOURCE CODE LICENSE

SOST source code is published under the MIT License. The code is fully open-source and free to use, modify, and distribute. See LICENSE for full terms.