Zaki Manian zmanian

@zmanian
zmanian / gist:a24ca70cffc7848c951c878eabe273bc
Last active February 18, 2026 00:14
Sender-Controlled Transaction Privacy & MEV Supply Chain via Mosaik Streams (with code sketch)

Sender-Controlled Transaction Privacy & MEV Supply Chain via Mosaik Streams

Design notes and working code sketch for integrating priority auctions, competitive builder marketplaces, and sender-controlled transaction privacy into Mosaik-based consensus architectures (Commonware Simplex, CometBFT).

Code: zmanian/commonware-mempool (compiles and runs)

Context

Mosaik provides typed Producer<T> / Consumer<T> streams with tag-based discovery and subscribe_if predicate re-evaluation. Combined with Commonware Simplex BFT, this gives us a dual-stack architecture where consensus traffic runs on Commonware's authenticated P2P and transaction dissemination runs on Mosaik streams.
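
The linked repo holds the full code; as a rough illustration of the shape only, here is a self-contained Rust sketch of the typed-stream idea with a `subscribe_if`-style predicate subscription. The `Stream` and `PrivateTx` types and their field names are hypothetical stand-ins, not the actual Mosaik or commonware-mempool APIs.

```rust
// Sketch only: `Stream`, `publish`, and `subscribe_if` mimic the shape of the
// Mosaik primitives described above; they are not the real crate API.
use std::collections::VecDeque;

/// A typed transaction envelope a sender might publish onto a stream.
#[derive(Clone, Debug)]
struct PrivateTx {
    payload: Vec<u8>,   // encrypted until reveal, per the privacy design
    priority_fee: u64,  // bid for the priority auction
    reveal_height: u64, // earliest height at which the payload may be opened
}

/// Minimal in-memory stand-in for a typed producer/consumer pair.
struct Stream<T> {
    buf: VecDeque<T>,
}

impl<T: Clone> Stream<T> {
    fn new() -> Self {
        Self { buf: VecDeque::new() }
    }
    fn publish(&mut self, item: T) {
        self.buf.push_back(item);
    }
    /// Predicate subscription: only items matching `pred` are delivered,
    /// mirroring the `subscribe_if` re-evaluation described above.
    fn subscribe_if<'a>(
        &'a self,
        pred: impl Fn(&T) -> bool + 'a,
    ) -> impl Iterator<Item = T> + 'a {
        self.buf.iter().filter(move |t| pred(*t)).cloned()
    }
}

fn main() {
    let mut txs: Stream<PrivateTx> = Stream::new();
    txs.publish(PrivateTx { payload: vec![0xde, 0xad], priority_fee: 50, reveal_height: 101 });
    txs.publish(PrivateTx { payload: vec![0xbe, 0xef], priority_fee: 5, reveal_height: 101 });

    // A builder subscribes only to transactions it is willing to bid on for the next block.
    let next_height = 101;
    for tx in txs.subscribe_if(|t| t.reveal_height <= next_height && t.priority_fee >= 10) {
        println!("builder bids on tx with priority fee {}", tx.priority_fee);
    }
}
```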

@zmanian
zmanian / mosaik-mempool-analysis.md
Created February 16, 2026 05:09
Mosaik as a Framework for Typed Mempools with Integrated Order Matching -- exploring whether Mosaik can replace CometBFT/Commonware mempool patterns

Mosaik as a Framework for Typed Mempools with Integrated Order Matching

Exploring whether Mosaik -- Flashbots' self-organizing p2p runtime -- could serve as the foundation for a high-performance mempool (in the style of Commonware or CometBFT) that collapses transaction dissemination, order matching, and block production into a single pipeline.

The Traditional Mempool Pipeline

In CometBFT and most blockchain architectures, the pipeline is segmented:

Tx submission -> Mempool gossip (opaque bytes) -> Block proposer selects txs
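
As a rough illustration of what "opaque bytes" versus a typed pipeline means in practice, a minimal Rust sketch follows; the types are hypothetical, not CometBFT or Mosaik APIs.

```rust
// Illustrative sketch of the contrast drawn above; names are hypothetical.

/// Traditional pipeline: the mempool gossips opaque bytes that only block
/// execution can interpret.
struct OpaqueMempool {
    txs: Vec<Vec<u8>>,
}

/// Typed pipeline: transactions carry structure at dissemination time, so
/// order matching and proposal can act on them without a separate decode step.
#[derive(Debug)]
enum TypedTx {
    Transfer { to: [u8; 20], amount: u128 },
    LimitOrder { sell: &'static str, buy: &'static str, price: u64, qty: u64 },
}

fn main() {
    let _opaque = OpaqueMempool { txs: vec![vec![0u8; 128]] };

    let typed = vec![
        TypedTx::Transfer { to: [0u8; 20], amount: 1_000 },
        TypedTx::LimitOrder { sell: "ATOM", buy: "USDC", price: 9, qty: 500 },
    ];

    // A proposer/matcher can select just the orders it cares about directly.
    for tx in typed.iter().filter(|t| matches!(t, TypedTx::LimitOrder { .. })) {
        println!("matchable order: {tx:?}");
    }
}
```
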
@zmanian
zmanian / mosaik-intents-analysis.md
Created February 16, 2026 05:09
Mosaik as an Intent Settlement Coordination Layer -- analysis of how Mosaik's p2p primitives map onto orderflow/solver/auction systems and NEAR Intents

Mosaik as an Intent Settlement Coordination Layer

Analysis of Mosaik (v0.2.1) -- a Rust runtime for self-organizing, leaderless distributed systems built on iroh (QUIC-based p2p networking) -- and how its primitives map onto intent-based orderflow systems like NEAR Intents.

What Mosaik Is

Nodes deployed on plain VMs self-organize into a functioning topology using just a secret key, a gossip seed, and role tags. No orchestration needed. Four subsystems:

  • Discovery -- gossip-based peer announcement + catalog sync
  • Streams -- typed pub/sub data channels (producer/consumer)
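
As a rough sketch of the Discovery piece, a local catalog folded together from gossiped announcements and queried by role tag might look like the following; the names are assumptions, not the real mosaik crate API.

```rust
// Illustrative stand-in for gossip-based peer announcement + catalog sync.
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Announcement {
    node_id: String,   // derived from the node's secret key in practice
    tags: Vec<String>, // role tags, e.g. "solver", "builder"
}

#[derive(Default)]
struct Catalog {
    by_tag: HashMap<String, Vec<Announcement>>,
}

impl Catalog {
    /// Fold a gossiped announcement into the local catalog (catalog sync).
    fn observe(&mut self, ann: Announcement) {
        for tag in &ann.tags {
            self.by_tag.entry(tag.clone()).or_default().push(ann.clone());
        }
    }
    /// Find peers advertising a given role, e.g. to open a stream to them.
    fn peers_with(&self, tag: &str) -> &[Announcement] {
        self.by_tag.get(tag).map(Vec::as_slice).unwrap_or(&[])
    }
}

fn main() {
    let mut catalog = Catalog::default();
    catalog.observe(Announcement { node_id: "n1".to_string(), tags: vec!["solver".to_string()] });
    catalog.observe(Announcement { node_id: "n2".to_string(), tags: vec!["builder".to_string()] });
    let solvers: Vec<_> = catalog.peers_with("solver").iter().map(|a| &a.node_id).collect();
    println!("solvers known: {solvers:?}");
}
```
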
@zmanian
zmanian / gist-near-intents.md
Created February 16, 2026 04:54
Decentralized NEAR Intents Settlement on Mosaik

Decentralized NEAR Intents Settlement on Mosaik

Problem

NEAR Intents (formerly Defuse Protocol) coordinates intent-based settlement across 18+ chains. The current architecture relies on a centralized Solver Relay (solver-relay-v2.chaindefuser.com) as the sole coordination point between users and solvers. This is a single point of failure, a censorship vector, and a trust assumption on the relay operator.

Real NEAR Intents Protocol

Token Diff Model

@zmanian
zmanian / gist-cometbft-mempool.md
Created February 16, 2026 04:54
Proposer-Aware Mempool for CometBFT

Proposer-Aware Mempool for CometBFT

Problem

CometBFT uses flood gossip for mempool propagation: every validator receives every transaction regardless of whether it is the current block proposer. In a 100-validator network, 99 validators maintain redundant copies of the mempool that they will never use to build a block during the current slot. This wastes bandwidth and creates unnecessary load.

Insight

The proposer schedule in CometBFT is fully deterministic. All validators independently compute the same proposer for any (height, round) pair using weighted round-robin -- no communication needed. This means we can route transactions preferentially to upcoming proposers without any new consensus messages.
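
A simplified Rust model of that idea: compute the upcoming proposers locally with a priority-based weighted round robin (a simplified stand-in for CometBFT's actual selection, which adds details such as priority centering and clamping), and only forward a transaction to a peer that will propose within a small lookahead window. The `route_to` helper is hypothetical.

```rust
// Simplified model: every node can compute the same proposer schedule locally,
// so it can forward a tx only to validators proposing soon.

#[derive(Clone)]
struct Validator {
    name: &'static str,
    power: i64,
    priority: i64,
}

/// Advance the schedule one step and return the proposer's index: every
/// priority increases by its voting power, the highest priority wins, and the
/// winner's priority is reduced by the total power.
fn next_proposer(vals: &mut [Validator]) -> usize {
    let total: i64 = vals.iter().map(|v| v.power).sum();
    for v in vals.iter_mut() {
        v.priority += v.power;
    }
    let idx = vals
        .iter()
        .enumerate()
        .max_by_key(|(_, v)| v.priority)
        .map(|(i, _)| i)
        .unwrap();
    vals[idx].priority -= total;
    idx
}

/// Should we push a tx to `peer` now? Only if it proposes within `lookahead` steps.
fn route_to(peer: &str, mut vals: Vec<Validator>, lookahead: usize) -> bool {
    (0..lookahead).any(|_| {
        let idx = next_proposer(&mut vals);
        vals[idx].name == peer
    })
}

fn main() {
    let vals = vec![
        Validator { name: "a", power: 10, priority: 0 },
        Validator { name: "b", power: 5, priority: 0 },
        Validator { name: "c", power: 1, priority: 0 },
    ];
    // Forward to "b" only if it is scheduled to propose in the next 3 heights.
    println!("send to b now? {}", route_to("b", vals.clone(), 3));
    println!("send to c now? {}", route_to("c", vals, 3));
}
```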

@zmanian
zmanian / 2026-02-13-guest-confinement-design.md
Created February 14, 2026 06:25
Endo Guest Confinement Design: Deny identifiers, work in petnames and live values

Guest Confinement: Deny Identifiers, Work in Petnames and Live Values

Motivation

From Kris Kowal:

We should probably deny "guests" the use and visibility into identifiers/locators and just make them work in terms of petnames and live values. That may require us to introduce E(guest).equals(a, b) and to create more methods that operate on values (where the engine looks up the identifier behind the scenes). This would be consistent with MarkM's "distributed confinement" notions, where the confined applications don't get to see swissnums and addresses. This is also consistent with our value proposition: Giving an LLM cryptographic data is bad. Providing fake cryptographic data as a stand-in for real cryptographic data is cute, but why do that when you can give the LLM the ability to choose its own names and never expose them to large, immemorable numbers?

Why this matters

Merging daemon-collector into the LLM Branch

Context

The llm-long-running-tasks branch is our main work spike for LLM agent infrastructure on Endo. It includes a llamadrome package (multi-backend LLM agent with Anthropic/Ollama), conversation persistence across daemon restarts, and a set of new daemon verbs for capability-separated evaluation: define, endow, form, requestEvaluation, and storeValue. The idea is that an LLM guest can propose code with named slots, the host endows those slots with capabilities, and the code runs in a sandbox with host approval.

I've looked at the daemon-reply and daemon-collector branches and want to merge that work into the LLM branch so we're building on the latest daemon internals rather than diverging further.

Plan

Ax + Endo: DSPy-style LLM Programs in an Object-Capability Runtime

Background

Ax (@ax-llm/ax)

Ax is a TypeScript DSPy implementation for building production LLM applications. Instead of writing prompts, you define signatures -- declarative input/output contracts -- and Ax handles prompt generation, structured output extraction, validation, retries, and optimization.

Key capabilities:

  • Signatures: 'review:string -> sentiment:class "positive, negative, neutral"' -- no prompt engineering