Experiment 040: Reader Slot Event Port Cleanup

Date: 2026-04-14

Status: Accepted

Problem

The reader pool had accumulated protocol complexity that no longer matched the
real invariants of the system.

At this point the pool already guaranteed:

  1. one in-flight request per worker
  2. synchronous request handling inside the worker isolate
  3. worker respawn on sacrifice
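
The three invariants can be modeled in a few lines. A minimal sketch (in Python
rather than the project's Dart, with illustrative names — this is not the actual
reader-pool API):

```python
class ReaderSlot:
    """Toy model of one pool slot and its three invariants."""

    def __init__(self, spawn):
        self._spawn = spawn    # factory standing in for worker/isolate spawn
        self.worker = spawn()
        self.in_flight = False

    def request(self, payload):
        # Invariant 1: at most one in-flight request per worker.
        if self.in_flight:
            raise RuntimeError("slot busy: one in-flight request per worker")
        self.in_flight = True
        try:
            # Invariant 2: the worker handles the request synchronously.
            return self.worker(payload)
        finally:
            self.in_flight = False

    def sacrifice(self):
        # Invariant 3: a sacrificed worker is respawned before reuse.
        self.worker = self._spawn()
        self.in_flight = False
```

Under this model, a slot never needs a per-slot queue or partial-response
tracking: its state is just a worker handle and a busy flag.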

But the implementation still carried extra protocol and state beyond those
invariants. That meant more moving pieces in the hot read path than the model
actually needed.

Hypothesis

Collapse the protocol down to the real invariants listed above.

This should reduce per-query dispatch overhead and leave less slot state to
reason about.
What Changed

In lib/src/reader/read_worker.dart and lib/src/reader/reader_pool.dart, the
changes center on the per-worker event port and its lifecycle.
The resulting slot state is now reduced to what those invariants require.

Benchmark

Full suite, 3 repeats. Read-path highlights from the comparison summary:

| Metric                   | Before      | After       | Result       |
|--------------------------|-------------|-------------|--------------|
| Point query throughput   | 101,010 qps | 116,659 qps | +15% (win)   |
| select() maps, 10K rows  | 5.60 ms     | 4.84 ms     | -14% (win)   |
| selectBytes(), 1K rows   | 0.50 ms     | 0.49 ms     | within noise |
| selectBytes(), 10K rows  | 6.05 ms     | 5.88 ms     | within noise |
| Concurrent reads         | mixed       | mixed       | within noise |
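
The percentage deltas in the Result column follow directly from the raw
numbers:

```python
def pct_delta(before, after):
    """Relative change between two measurements, rounded to whole percent."""
    return round((after - before) / before * 100)

# Throughput: higher is better, so +15% is a win.
print(pct_delta(101_010, 116_659))  # 15
# Latency: lower is better, so -14% is a win.
print(pct_delta(5.60, 4.84))        # -14
```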

The strongest measured signal is improved per-query dispatch overhead, which is
exactly where this cleanup was expected to help.
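
Translating the point-query throughput figures into per-query cost makes that
claim concrete (pure arithmetic on the numbers in the table above):

```python
# Per-query cost implied by the point-query throughput before and after.
us_before = 1e6 / 101_010   # ~9.90 µs per query
us_after = 1e6 / 116_659    # ~8.57 µs per query

# The difference is the per-query dispatch cost the cleanup removed.
print(f"{us_before - us_after:.2f} µs saved per query")  # 1.33 µs saved per query
```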

Decision

Accepted — this is both a code-quality and a performance win.

The protocol now matches the actual semantics of the reader pool more closely,
and the benchmark confirms that the simplification did not trade correctness for
speed. The gain is not dramatic, but it is real, targeted, and comes with less
state to reason about in the slot lifecycle.