Introduction

The Agglayer Knowledge Base is the human-first reference for architecture, domain concepts, and contributor workflows in this repository. It is intended to be useful for both maintainers and AI coding agents, with clarity and operational correctness as the primary goals.

What this book covers

  • Core terms and protocol language used across Agglayer.
  • Crate and domain ownership, including certificate lifecycle and settlement flow.
  • Safety-sensitive subsystems, such as pessimistic proof, protobuf boundaries, and storage layout.
  • Contributor workflows for AI-agent configuration and docs publishing.

How to use it

  1. Start with Glossary and Architecture to build shared context.
  2. Use domain chapters for deep dives: Pessimistic Proof, Protobuf and gRPC, and Storage.
  3. Use workflow chapters for contributor operations: AI Agent Configuration and Documentation Publishing.

Editorial expectations

  • Keep content factual, concise, and linked to concrete code paths where relevant.
  • Prefer explicit safety invariants, failure modes, and verification guidance over vague descriptions.
  • Update Summary when adding or restructuring chapters.

Glossary

This glossary defines the core terms used across Agglayer. Use it together with Architecture, Pessimistic Proof, Protobuf and gRPC, and Storage.

Aggchain proof

An optional proof payload attached to a certificate by a connected chain. Agglayer validates it according to the configured proof mode and commitment version.

Balance tree

A per-network Merkle structure that tracks token balances used by the pessimistic proof. State transitions in certificates update this tree. See Storage.

Bridge exit

A cross-chain transfer intent created on a source network. It is represented in Merkle commitments and consumed during verification and settlement.

Certificate

A chain update submitted to Agglayer. It carries the data needed to advance network state, including exit roots and optional proof material. See Architecture.

Certificate header

Metadata and status for a certificate stored by Agglayer. Headers track lifecycle state and failure information.

Commitment version

The schema version of the hash commitment signed or proven by a chain. Versions evolve over time (for example V2 through V5) to include additional fields and safety checks.

Epoch

A pacing window used by Agglayer to process and settle certificates. Epochs can be time-based or block-based depending on node configuration.
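
As an illustrative sketch only (the real pacing logic lives in agglayer-clock, and these function names are hypothetical), both clock kinds reduce to integer division over a configured window:

```rust
// Illustrative sketch only: the actual clock implementations live in
// agglayer-clock. A time clock maps wall-clock seconds to an epoch number;
// a block clock divides L1 block height by a configured block span.

/// Epoch number for a time clock: full `epoch_duration_secs` windows elapsed
/// since `genesis_secs`.
fn time_epoch(now_secs: u64, genesis_secs: u64, epoch_duration_secs: u64) -> u64 {
    assert!(epoch_duration_secs > 0, "epoch duration must be non-zero");
    now_secs.saturating_sub(genesis_secs) / epoch_duration_secs
}

/// Epoch number for a block clock: full `blocks_per_epoch` spans since
/// `genesis_block`.
fn block_epoch(block_height: u64, genesis_block: u64, blocks_per_epoch: u64) -> u64 {
    assert!(blocks_per_epoch > 0, "block span must be non-zero");
    block_height.saturating_sub(genesis_block) / blocks_per_epoch
}

fn main() {
    // 350 seconds after genesis with 100-second epochs lands in epoch 3.
    assert_eq!(time_epoch(1_000_350, 1_000_000, 100), 3);
    // Block 1_050 with 25-block epochs starting at block 1_000 is epoch 2.
    assert_eq!(block_epoch(1_050, 1_000, 25), 2);
}
```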

Imported bridge exit

A bridge exit originating from another network, included as an input to the destination network transition.

L1 info root

A root commitment from the L1 view used by Agglayer proof and validation logic. It anchors certificate processing to L1 context.

Local exit root

A Merkle root summarizing exits for a specific network transition. Certificates and proof outputs contain old/new local exit roots.

Network ID

The logical Agglayer identifier for a connected chain. Network ID is a key dimension for storage, rate limits, and lifecycle state.

Nullifier tree

A per-network Merkle structure used to prevent replay or double-consumption of bridge exits. See Storage.

Pessimistic proof

The core zero-knowledge proof used by Agglayer to verify state transitions safely. It is generated and verified through the SP1-based proof pipeline. See Pessimistic Proof.

Pessimistic root

A commitment in the pessimistic-proof state that summarizes balance/nullifier state. Proof outputs include previous and new pessimistic roots.

Proof mode

The verification mode configured for a network, for example legacy ECDSA, multisig, or STARK plus multisig. Mode selection determines which checks run for each certificate.

Settlement

The process of submitting a proven certificate result to Ethereum L1. Settlement finalizes cross-chain state transitions. See Architecture.

SP1 zkVM

The zero-knowledge virtual machine used by the pessimistic-proof program. Agglayer compiles the guest program to an ELF artifact and verifies proofs on the host.

Verification key (vkey)

The cryptographic key identifying a specific proof program/circuit. A vkey change is security-sensitive and requires explicit acceptance workflow. See Pessimistic Proof.

Vkey selector

A short selector derived from the active verification key version, used by protocol components to route verification logic.

Witness

Concrete private and public input data fed into the proof program to produce a proof for a specific certificate transition.

Architecture

This chapter maps the Agglayer workspace to functional domains, then describes the certificate lifecycle and settlement path. Use it as the canonical “which crate owns what” index, complementing the README notes on crate/domain ownership.

Crate map

Node entrypoints and runtime wiring

| Crate | Primary responsibility |
| --- | --- |
| agglayer | CLI binary and user-facing subcommands (run, config, backups, vkey tools) |
| agglayer-node | Node bootstrap and component wiring |
| agglayer-config | TOML configuration schema, parsing, validation, path contextualization |
| agglayer-clock | Epoch pacing (time clock and block clock) |
| agglayer-telemetry | Metrics export and tracing integration |
| agglayer-utils | Shared helpers used across crates |

Certificate processing and settlement

| Crate | Primary responsibility |
| --- | --- |
| agglayer-certificate-orchestrator | Certificate lifecycle orchestration and task scheduling |
| agglayer-aggregator-notifier | Certifier client and epoch packing pipeline |
| agglayer-settlement-service | Settlement request handling and L1 transaction workflow |
| agglayer-signer | Signing abstraction used for settlement/proof flows |
| agglayer-gcp-kms | GCP KMS-backed key management |
| agglayer-contracts | Contract bindings and contract-facing settlement logic |

API and transport

| Crate | Primary responsibility |
| --- | --- |
| agglayer-jsonrpc-api | JSON-RPC API surface and handlers |
| agglayer-rpc | Internal RPC service implementation |
| agglayer-rate-limiting | Shared rate-limiting middleware |
| agglayer-grpc-api | gRPC API traits and service definitions |
| agglayer-grpc-server | gRPC server implementation |
| agglayer-grpc-client | gRPC client implementation |
| agglayer-grpc-types | Protobuf/generated conversion layer |

State, types, and testing

| Crate | Primary responsibility |
| --- | --- |
| agglayer-storage | RocksDB physical/logical storage layers and migrations |
| agglayer-types | Core domain types and shared error/status types |
| agglayer-test-suite | Shared test fixtures and test helpers |

Pessimistic-proof pipeline

| Crate | Primary responsibility |
| --- | --- |
| pessimistic-proof-core | Proof primitives and transition logic |
| pessimistic-proof | Host-side SP1 integration and verification helpers |
| pessimistic-proof-program | SP1 zkVM guest program (no_main) |
| pessimistic-proof-test-suite | Proof-focused integration and compatibility tests |

Notes:

  • pessimistic-proof-program is intentionally excluded from the default Cargo workspace build graph because it is cross-compiled for SP1.
  • Several crates expose testutils features for test-only helpers. Prefer those helpers over ad hoc mocks when extending tests.

Dependency tiers

Use this dependency mental model when scoping changes:

  1. Foundations: agglayer-types, agglayer-storage, proof crates.
  2. Domain services: orchestrator, settlement, notifier, signer, contracts.
  3. Transport surfaces: JSON-RPC and gRPC crates.
  4. Runtime composition: agglayer-node and the agglayer binary.

Changes in lower tiers have larger blast radius. For example, edits in agglayer-types are likely to affect all upper tiers.

Certificate lifecycle

Certificates move through a deterministic pipeline.

Client submit (JSON-RPC / gRPC)
  -> Pending persistence
  -> Orchestrator scheduling
  -> Proof/certification execution
  -> Proven header update
  -> Settlement request creation
  -> L1 transaction submission
  -> Settled header/state update
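
The pipeline above can be sketched as a forward-only status machine. This is an illustrative simplification, not the exact enum in agglayer-types; the property it shows is that certificates only advance or fail, never move backward:

```rust
// Illustrative sketch: a forward-only status machine matching the pipeline
// above. The real lifecycle in agglayer-types has more states and metadata.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Status {
    Pending,
    Proven,
    Settled,
    InError,
}

/// Allow only forward transitions, plus failure from any non-terminal state.
fn advance(from: Status, to: Status) -> Result<Status, String> {
    use Status::*;
    match (from, to) {
        (Pending, Proven) | (Proven, Settled) => Ok(to),
        (Pending, InError) | (Proven, InError) => Ok(to),
        _ => Err(format!("illegal transition {from:?} -> {to:?}")),
    }
}

fn main() {
    assert_eq!(advance(Status::Pending, Status::Proven), Ok(Status::Proven));
    assert_eq!(advance(Status::Proven, Status::Settled), Ok(Status::Settled));
    // Settled is terminal: moving backward is rejected.
    assert!(advance(Status::Settled, Status::Pending).is_err());
}
```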

Ownership by phase:

  • Submission and request validation: agglayer-jsonrpc-api, agglayer-rpc, agglayer-grpc-server.
  • Pending/proven/settled state transitions: agglayer-storage and agglayer-certificate-orchestrator.
  • Proof generation and verification: agglayer-aggregator-notifier, pessimistic-proof* crates.
  • L1 settlement: agglayer-settlement-service, agglayer-contracts, agglayer-signer.

Settlement flow

Settlement finalizes proven certificates on Ethereum L1.

  1. The orchestrator marks a certificate as ready for settlement.
  2. The settlement service constructs the contract call payload.
  3. The signer produces transaction signatures (local key or GCP KMS).
  4. Contract adapters submit via Alloy providers.
  5. Storage updates to settled status after confirmation criteria are met.

Safety expectations:

  • Never bypass proof-validation preconditions before settlement.
  • Keep retries idempotent and bounded by configuration.
  • Treat signer and contract changes as security-sensitive, with explicit blast-radius analysis.
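
The bounded-retry expectation can be sketched as follows. The names with_retries and submit are hypothetical, not Agglayer APIs; the real settlement service reads its limits from configuration, and the operation passed in must be idempotent:

```rust
// Illustrative sketch only: `with_retries` and `submit` are hypothetical names.
// The retry count comes from configuration, and the operation must be safe to
// repeat (idempotent) because a retry may follow a partially observed failure.
fn with_retries<T, E>(
    max_retries: u32,
    mut submit: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    // One initial attempt plus up to `max_retries` repeats, never unbounded.
    for _attempt in 0..=max_retries {
        match submit() {
            Ok(value) => return Ok(value),
            Err(e) => last_err = Some(e),
        }
        // A production loop would also back off (sleep with jitter) here.
    }
    Err(last_err.expect("loop ran at least once"))
}

fn main() {
    let mut calls = 0;
    let result: Result<&str, &str> = with_retries(3, || {
        calls += 1;
        if calls < 3 { Err("transient failure") } else { Ok("settled") }
    });
    assert_eq!(result, Ok("settled"));
    assert_eq!(calls, 3);
}
```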

Pessimistic Proof

Agglayer’s safety model relies on pessimistic proofs to validate cross-network state transitions. This chapter explains crate responsibilities, operational invariants, and the development workflow.

Architecture

The proof stack is intentionally split across three crates:

  • pessimistic-proof-core contains transition logic, input/output structures, and proof-domain errors.
  • pessimistic-proof-program is the SP1 guest program. It runs inside the zkVM and commits public outputs.
  • pessimistic-proof is the host-side wrapper that embeds the ELF, drives proving/verifying, and maps errors for callers.

At runtime, Agglayer prepares witness inputs from certificate and network state, executes or requests proving, and verifies that outputs match expected commitments before allowing settlement.

Safety invariants

When changing proof-related code, treat these invariants as non-negotiable:

  • Determinism: identical inputs must produce identical public outputs.
  • Commitment compatibility: commitment version semantics (V2 through V5) must remain consistent with verifier expectations.
  • State continuity: previous/new roots in proof outputs must correspond to storage transitions.
  • Verifier identity stability: vkey changes are explicit protocol events, not incidental refactors.

For the full validity-check matrix, including proof and signature combinations, see docs/validity_checks.md.
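
The determinism invariant can be checked with a property-style test. The transition function below is a stand-in, not the real proof program; the pattern is simply running identical inputs twice and requiring identical public outputs:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative sketch: `transition` stands in for the proof program's
// state-transition function and is not real Agglayer code. The invariant
// under test is that identical inputs always produce identical outputs.
fn transition(prev_root: u64, bridge_exits: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    prev_root.hash(&mut hasher);
    bridge_exits.hash(&mut hasher);
    hasher.finish() // stand-in for the new pessimistic root
}

fn main() {
    let first = transition(42, &[1, 2, 3]);
    let second = transition(42, &[1, 2, 3]);
    assert_eq!(first, second, "identical inputs must yield identical outputs");
}
```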

Development workflow

Use the dedicated make tasks for proof changes.

cargo make pp-elf

This workflow builds the ELF and runs vkey/cycle-tracker checks. For targeted checks:

cargo make pp-check-vkey-change

If a vkey change is intentional, acceptance must be explicit and reviewed:

cargo make pp-accept-vkey-change

Guidelines for safe edits:

  • Prefer minimal, locally justified proof changes.
  • Include proof-focused tests in pessimistic-proof-test-suite when behavior changes.
  • Document any semantic changes to public values or commitments.

Protobuf and gRPC

Agglayer uses protobuf as the schema boundary for gRPC and storage payloads. This chapter defines file ownership, crate responsibilities, and safe evolution workflows.

Proto layout

Schemas live under proto/agglayer/.

  • proto/agglayer/node/ contains public node/service definitions.
  • proto/agglayer/storage/ contains storage-related protobuf schemas.

Generation and compatibility are configured via:

  • buf.yaml
  • buf.rust.gen.yaml
  • buf.storage.gen.yaml

gRPC crate responsibilities

| Crate | Responsibility |
| --- | --- |
| agglayer-grpc-api | Service traits and API-facing request/response contracts |
| agglayer-grpc-types | Generated types and compatibility conversions |
| agglayer-grpc-server | Tonic server implementation and endpoint wiring |
| agglayer-grpc-client | Tonic client wrappers used by consumers/tests |

Standard workflow for schema changes

  1. Edit the .proto source file under proto/agglayer/.

  2. Regenerate artifacts:

    cargo make generate-proto
    
  3. Update server/client behavior in the relevant gRPC crates.

  4. Run verification checks and ensure generated outputs are committed.

Never edit generated code by hand. Generated files are outputs, not the source of truth.

Standard workflow for adding a new gRPC endpoint

  1. Add or extend the service definition in proto/agglayer/node/....
  2. Regenerate protobuf and tonic outputs.
  3. Add conversion logic in agglayer-grpc-types if required.
  4. Implement server behavior in agglayer-grpc-server.
  5. Add/update client calls in agglayer-grpc-client.
  6. Add API and integration tests.

Compatibility rules

  • Do not rename or repurpose existing field numbers.
  • Prefer adding new optional fields over changing existing meaning.
  • Use reserved fields/messages when removing obsolete numbers or names.
  • Keep wire compatibility in mind for rolling deployments.
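
These rules can be illustrated with a hypothetical message (not a real Agglayer schema) that removes two fields safely and adds a new optional one:

```protobuf
// Illustrative sketch, not a real Agglayer schema.
syntax = "proto3";

message CertificateStatus {
  // Field numbers 2 and 3 were removed in an earlier revision; reserving the
  // numbers and names prevents accidental reuse with a different meaning.
  reserved 2, 3;
  reserved "legacy_error", "legacy_height";

  string certificate_id = 1;

  // New data is added under a fresh field number as an optional field;
  // old readers simply ignore it, preserving wire compatibility.
  optional string settlement_tx_hash = 4;
}
```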

Storage

Agglayer storage is implemented on RocksDB with a strict separation between physical database mechanics and logical domain stores.

Database topology

Storage configuration exposes multiple paths, either inferred from a common db-path or configured explicitly. At runtime, the node currently opens pending/state/epochs databases, and optionally debug storage in debug mode.

| Database | Default subpath | Primary purpose |
| --- | --- | --- |
| Pending DB | pending/ | Pending queue and proof material |
| State DB | state/ | Canonical per-network state |
| Epochs DB root | epochs/ | Root directory for per-epoch RocksDB instances |
| Debug DB | debug/ | Debug-only certificate traces (opened only in debug mode) |

See crates/agglayer-config/src/storage.rs for configuration details.

Note:

  • metadata_db_path exists in config today, but node startup currently does not open a dedicated metadata RocksDB. Metadata is stored via the metadata_cf column family in the state DB.

Physical vs logical layers

  • Physical layer (crates/agglayer-storage/src/storage/): typed column-family access, serialization codecs, batched writes, iterators, and RocksDB open/migration mechanics.
  • Logical layer (crates/agglayer-storage/src/stores/): domain stores with business-oriented APIs (StateStore, PendingStore, EpochStore, DebugStore).

Keep domain policy in logical stores. Keep encoding and persistence mechanics in the physical layer.
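
The split can be sketched in miniature. None of these types match the real agglayer-storage API; they only show where policy belongs:

```rust
use std::collections::HashMap;

// Illustrative sketch only: not the real agglayer-storage API. The physical
// layer moves bytes for one column family; the logical store layers domain
// policy (here, "settled headers are immutable") on top.

/// Physical layer: typed access to a column family, no business rules.
#[derive(Default)]
struct ColumnFamily {
    rows: HashMap<Vec<u8>, Vec<u8>>,
}

impl ColumnFamily {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.rows.insert(key, value);
    }
    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        self.rows.get(key)
    }
}

/// Logical layer: a domain store with a business-oriented API.
#[derive(Default)]
struct StateStore {
    headers: ColumnFamily,
}

impl StateStore {
    fn insert_header(&mut self, id: &str, status: &str) -> Result<(), String> {
        // Domain policy lives here, not in ColumnFamily.
        if self.headers.get(id.as_bytes()).map(Vec::as_slice) == Some(&b"settled"[..]) {
            return Err("settled headers are immutable".into());
        }
        self.headers.put(id.into(), status.into());
        Ok(())
    }
}

fn main() {
    let mut store = StateStore::default();
    store.insert_header("cert-1", "pending").unwrap();
    store.insert_header("cert-1", "settled").unwrap();
    // Once settled, the logical layer refuses further writes.
    assert!(store.insert_header("cert-1", "pending").is_err());
}
```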

Column families by store

State DB (stores/state/cf_definitions.rs):

  • certificate_header_cf
  • certificate_per_network_cf
  • latest_settled_certificate_per_network_cf
  • metadata_cf
  • local_exit_tree_per_network_cf
  • balance_tree_per_network_cf
  • nullifier_tree_per_network_cf
  • network_info_cf
  • disabled_networks_cf

Pending DB (stores/pending/cf_definitions.rs):

  • latest_proven_certificate_per_network_cf
  • latest_pending_certificate_per_network_cf
  • pending_queue_cf
  • proof_per_certificate_cf

Per-epoch DB (stores/per_epoch/cf_definitions.rs):

  • per_epoch_certificates_cf
  • per_epoch_metadata_cf
  • per_epoch_proofs_cf
  • per_epoch_start_checkpoint_cf
  • per_epoch_end_checkpoint_cf

Debug DB (stores/debug/cf_definitions.rs):

  • debug_certificates

Migration bookkeeping also uses a dedicated migration column family.

Migrations, backups, and safety

  • Migration logic lives under crates/agglayer-storage/src/storage/migration/ and includes checks for unexpected/default column-family content.
  • Storage protobuf schemas under proto/agglayer/storage/v0/ define compatibility boundaries for stored structures.
  • Backups are managed via storage backup configuration and CLI backup commands.

When changing storage schemas or keys:

  1. Define the migration path up front.
  2. Keep reads backward-compatible where possible.
  3. Add tests covering upgrade and rollback behavior.
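
A backward-compatible read path often takes the form of a version-tagged decoder. The byte layout below is invented for illustration and is not the real agglayer-storage codec; the pattern is that new writers emit V2 while readers still accept V1 rows written before the migration:

```rust
// Illustrative sketch: a version-tagged decoder for a stored value. The
// encoding is invented, not the agglayer-storage codec.
#[derive(Debug, PartialEq)]
struct Header {
    id: String,
    retries: u32, // added in V2; V1 rows decode with the documented default
}

fn decode_header(raw: &[u8]) -> Result<Header, String> {
    match raw.split_first() {
        // V1: tag byte 1, then the id bytes. No retries field existed yet.
        Some((&1, rest)) => Ok(Header {
            id: String::from_utf8(rest.to_vec()).map_err(|e| e.to_string())?,
            retries: 0,
        }),
        // V2: tag byte 2, a big-endian u32 retry count, then the id bytes.
        Some((&2, rest)) if rest.len() >= 4 => {
            let (retries, id) = rest.split_at(4);
            Ok(Header {
                id: String::from_utf8(id.to_vec()).map_err(|e| e.to_string())?,
                retries: u32::from_be_bytes(retries.try_into().unwrap()),
            })
        }
        _ => Err("unknown or truncated header encoding".into()),
    }
}

fn main() {
    // A pre-migration (V1) row still decodes, with the default retry count.
    let v1 = [&[1u8][..], &b"cert-1"[..]].concat();
    assert_eq!(decode_header(&v1).unwrap().retries, 0);

    // A post-migration (V2) row carries the new field.
    let v2 = [&[2u8][..], &7u32.to_be_bytes()[..], &b"cert-1"[..]].concat();
    let decoded = decode_header(&v2).unwrap();
    assert_eq!(decoded.retries, 7);
    assert_eq!(decoded.id, "cert-1");
}
```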

AI agent configuration

This project uses AI coding agents (Claude Code and others) with shared configuration checked into the repository.

Design decisions

.agents/skills/ as source of truth. This keeps the configuration tool-agnostic. Claude Code discovers them via the .claude/skills symlink.

Use docs/knowledge-base/ for domain and architecture knowledge. Knowledge that should be read by both humans and agents belongs in mdbook chapters under docs/knowledge-base/src/. Skills should focus on workflows and decision procedures.

Prefer .agents/skills/ over .claude/rules/. Use .agents/skills/ for most conventions. .claude/rules/ may be used for Claude-specific behavior that doesn’t fit the skill model (e.g., sub-agent coordination).

Skills over AGENTS.md for task-specific workflows. AGENTS.md contains only always-on behavioral rules and a documentation index. Task-specific workflows (committing, PR creation, verification) are skills that load on demand, reducing context consumption.

Use a scripted blast-radius detector for scope decisions. scripts/blast_radius.py is the canonical detector for changed-file impact. Verification workflows should consume its output instead of re-deriving scope logic ad hoc.

Skill prefixes

Skill folders are prefixed by category to keep them organized:

  • workflow-: step-by-step actions with side effects; usually manual-only (disable-model-invocation: true).
  • domain-: agglayer-specific invariants and safety behavior.
  • analysis-: investigation and reasoning tasks (debugging, etc.); usually no side effects.
  • tech-: stack / tools playbooks, not domain-specific.
  • style-: writing and formatting conventions.
  • meta-: agent governance and maintenance workflows.
  • docs-: knowledge-base maintenance workflows.

Adding a new skill

  1. Pick the appropriate prefix from the list above.
  2. Create .agents/skills/<prefix><name>/SKILL.md with YAML frontmatter and instructions.
  3. Use disable-model-invocation: true for manual-only workflows (e.g., /commit).
  4. Use user-invocable: false for background conventions that Claude should apply automatically but users shouldn’t invoke directly.
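
A minimal SKILL.md might look like the following sketch; the name and description are hypothetical examples, while the frontmatter keys are the ones described in this chapter:

```markdown
---
name: workflow-commit
description: Create a well-formed commit from staged changes.
disable-model-invocation: true
---

# Commit workflow

Step-by-step instructions for the workflow go here.
```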

End-of-session retrospective

Run /session-retro at the end of a session to review the conversation and propose improvements to skills, documentation, or AGENTS.md.

Documentation publishing

Documentation is built and published by .github/workflows/doc.yml on every PR and every push to main.

The pipeline publishes two outputs together:

  • Knowledge base (mdbook): docs/knowledge-base/.
  • Rust API docs (rustdoc): cargo doc --no-deps --all --all-features.

The deployed site uses this layout:

  • / -> knowledge-base landing page.
  • /rustdoc/agglayer/ -> Rust API docs.

| Trigger | URL |
| --- | --- |
| Push to main | GitHub Pages (https://agglayer.github.io/agglayer/) |
| Pull request | Cloudflare Workers preview (https://<repo>-pr-<PR_NUMBER>-rust-docs.agglayer.dev) |

For PR previews, the workflow posts the deployed URL as a PR comment automatically.

Merge-queue behavior

The project uses GitHub merge queue. Two events matter:

  • merge_group: pre-merge validation builds docs but does not deploy.
  • push to main: deployment to GitHub Pages after merge queue completion.

The deploy-gh-pages job guards against fork deployment with !github.event.repository.fork.

Contributor expectations

  • For knowledge-base changes, ensure mdbook build docs/knowledge-base/ succeeds locally.
  • For API changes, keep /// Rust docs accurate and complete for public and pub(crate) items.
  • When linking between rustdoc items, avoid links known to break for pub(crate) cross-item references.

Agglayer crates are not published to crates.io, so docs.rs is not the canonical documentation surface. GitHub Pages is the canonical published location.