Introduction
The Agglayer Knowledge Base is the human-first reference for architecture, domain concepts, and contributor workflows in this repository. It is intended to be useful for both maintainers and AI coding agents, with clarity and operational correctness as the primary goals.
What this book covers
- Core terms and protocol language used across Agglayer.
- Crate and domain ownership, including certificate lifecycle and settlement flow.
- Safety-sensitive subsystems, such as pessimistic proof, protobuf boundaries, and storage layout.
- Contributor workflows for AI-agent configuration and docs publishing.
How to use it
- Start with Glossary and Architecture to build shared context.
- Use domain chapters for deep dives: Pessimistic Proof, Protobuf and gRPC, and Storage.
- Use workflow chapters for contributor operations: AI Agent Configuration and Documentation Publishing.
Editorial expectations
- Keep content factual, concise, and linked to concrete code paths where relevant.
- Prefer explicit safety invariants, failure modes, and verification guidance over vague descriptions.
- Update Summary when adding or restructuring chapters.
Glossary
This glossary defines the core terms used across Agglayer. Use it together with Architecture, Pessimistic Proof, Protobuf and gRPC, and Storage.
Aggchain proof
An optional proof payload attached to a certificate by a connected chain. Agglayer validates it according to the configured proof mode and commitment version.
Balance tree
A per-network Merkle structure that tracks token balances used by the pessimistic proof. State transitions in certificates update this tree. See Storage.
Bridge exit
A cross-chain transfer intent created on a source network. It is represented in Merkle commitments and consumed during verification and settlement.
Certificate
A chain update submitted to Agglayer. It carries the data needed to advance network state, including exit roots and optional proof material. See Architecture.
Certificate header
Metadata and status for a certificate stored by Agglayer. Headers track lifecycle state and failure information.
Commitment version
The schema version of the hash commitment signed or proven by a chain. Versions evolve over time (for example V2 through V5) to include additional fields and safety checks.
Epoch
A pacing window used by Agglayer to process and settle certificates. Epochs can be time-based or block-based depending on node configuration.
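As an illustration of time-based pacing, here is a minimal sketch of epoch numbering. The `epoch_number` helper and its parameters are hypothetical, not the actual Agglayer clock API:

```rust
use std::time::Duration;

/// Compute the epoch number for a timestamp under a time-based clock.
/// `genesis_secs` and `epoch_duration` stand in for configuration values.
fn epoch_number(now_secs: u64, genesis_secs: u64, epoch_duration: Duration) -> Option<u64> {
    // Timestamps before genesis have no epoch.
    let elapsed = now_secs.checked_sub(genesis_secs)?;
    Some(elapsed / epoch_duration.as_secs())
}

fn main() {
    // With 60-second epochs starting at t = 1_000, t = 1_130 falls in epoch 2.
    assert_eq!(epoch_number(1_130, 1_000, Duration::from_secs(60)), Some(2));
    assert_eq!(epoch_number(900, 1_000, Duration::from_secs(60)), None);
}
```

A block-based clock follows the same shape with block heights in place of timestamps.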
Imported bridge exit
A bridge exit originating from another network, included as an input to the destination network transition.
L1 info root
A root commitment from the L1 view used by Agglayer proof and validation logic. It anchors certificate processing to L1 context.
Local exit root
A Merkle root summarizing exits for a specific network transition. Certificates and proof outputs contain old/new local exit roots.
Network ID
The logical Agglayer identifier for a connected chain. Network ID is a key dimension for storage, rate limits, and lifecycle state.
Nullifier tree
A per-network Merkle structure used to prevent replay or double-consumption of bridge exits. See Storage.
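A simplified model of the replay-prevention invariant: the real structure is a Merkle tree with inclusion proofs, but a plain set is enough to illustrate the rule that an exit may be consumed at most once:

```rust
use std::collections::HashSet;

/// Simplified stand-in for the nullifier tree (illustrative only).
#[derive(Default)]
struct NullifierSet {
    consumed: HashSet<[u8; 32]>,
}

impl NullifierSet {
    /// Returns true the first time a nullifier is consumed, false on replay.
    fn try_consume(&mut self, nullifier: [u8; 32]) -> bool {
        self.consumed.insert(nullifier)
    }
}

fn main() {
    let mut set = NullifierSet::default();
    let n = [7u8; 32];
    assert!(set.try_consume(n)); // first consumption succeeds
    assert!(!set.try_consume(n)); // replay is rejected
}
```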
Pessimistic proof
The core zero-knowledge proof used by Agglayer to verify state transitions safely. It is generated and verified through the SP1-based proof pipeline. See Pessimistic Proof.
Pessimistic root
A commitment in the pessimistic-proof state that summarizes balance/nullifier state. Proof outputs include previous and new pessimistic roots.
Proof mode
The verification mode configured for a network, for example legacy ECDSA, multisig, or STARK plus multisig. Mode selection determines which checks run for each certificate.
Settlement
The process of submitting a proven certificate result to Ethereum L1. Settlement finalizes cross-chain state transitions. See Architecture.
SP1 zkVM
The zero-knowledge virtual machine used by the pessimistic-proof program. Agglayer compiles the guest program to an ELF artifact and verifies proofs on host.
Verification key (vkey)
The cryptographic key identifying a specific proof program/circuit. A vkey change is security-sensitive and requires explicit acceptance workflow. See Pessimistic Proof.
Vkey selector
A short selector derived from the active verification key version, used by protocol components to route verification logic.
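As a hedged sketch, one plausible derivation is taking a fixed-length prefix of the verification-key digest; Agglayer's actual selector derivation may differ:

```rust
/// Hypothetical selector derivation (illustrative, not the actual scheme):
/// take the first four bytes of the verification-key digest.
fn vkey_selector(vkey_digest: &[u8; 32]) -> [u8; 4] {
    let mut selector = [0u8; 4];
    selector.copy_from_slice(&vkey_digest[..4]);
    selector
}

fn main() {
    let digest = [0xABu8; 32];
    assert_eq!(vkey_selector(&digest), [0xAB; 4]);
}
```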
Witness
Concrete private and public input data fed into the proof program to produce a proof for a specific certificate transition.
Architecture
This chapter maps the Agglayer workspace to functional domains, then describes the certificate lifecycle and settlement path. Use it as the canonical “which crate owns what” index, together with the crate READMEs that document crate/domain ownership.
Crate map
Node entrypoints and runtime wiring
| Crate | Primary responsibility |
|---|---|
| agglayer | CLI binary and user-facing subcommands (run, config, backups, vkey tools) |
| agglayer-node | Node bootstrap and component wiring |
| agglayer-config | TOML configuration schema, parsing, validation, path contextualization |
| agglayer-clock | Epoch pacing (time clock and block clock) |
| agglayer-telemetry | Metrics export and tracing integration |
| agglayer-utils | Shared helpers used across crates |
Certificate processing and settlement
| Crate | Primary responsibility |
|---|---|
| agglayer-certificate-orchestrator | Certificate lifecycle orchestration and task scheduling |
| agglayer-aggregator-notifier | Certifier client and epoch packing pipeline |
| agglayer-settlement-service | Settlement request handling and L1 transaction workflow |
| agglayer-signer | Signing abstraction used for settlement/proof flows |
| agglayer-gcp-kms | GCP KMS-backed key management |
| agglayer-contracts | Contract bindings and contract-facing settlement logic |
API and transport
| Crate | Primary responsibility |
|---|---|
| agglayer-jsonrpc-api | JSON-RPC API surface and handlers |
| agglayer-rpc | Internal RPC service implementation |
| agglayer-rate-limiting | Shared rate-limiting middleware |
| agglayer-grpc-api | gRPC API traits and service definitions |
| agglayer-grpc-server | gRPC server implementation |
| agglayer-grpc-client | gRPC client implementation |
| agglayer-grpc-types | Protobuf/generated conversion layer |
State, types, and testing
| Crate | Primary responsibility |
|---|---|
| agglayer-storage | RocksDB physical/logical storage layers and migrations |
| agglayer-types | Core domain types and shared error/status types |
| agglayer-test-suite | Shared test fixtures and test helpers |
Pessimistic-proof pipeline
| Crate | Primary responsibility |
|---|---|
| pessimistic-proof-core | Proof primitives and transition logic |
| pessimistic-proof | Host-side SP1 integration and verification helpers |
| pessimistic-proof-program | SP1 zkVM guest program (`no_main`) |
| pessimistic-proof-test-suite | Proof-focused integration and compatibility tests |
Notes:
- `pessimistic-proof-program` is intentionally excluded from the default Cargo workspace build graph because it is cross-compiled for SP1.
- Several crates expose `testutils` features for test-only helpers. Prefer those helpers over ad hoc mocks when extending tests.
Dependency tiers
Use this dependency mental model when scoping changes:
- Foundations: `agglayer-types`, `agglayer-storage`, proof crates.
- Domain services: orchestrator, settlement, notifier, signer, contracts.
- Transport surfaces: JSON-RPC and gRPC crates.
- Runtime composition: `agglayer-node` and the `agglayer` binary.
Changes in lower tiers have larger blast radius.
For example, edits in `agglayer-types` are likely to affect all upper tiers.
Certificate lifecycle
Certificates move through a deterministic pipeline.
Client submit (JSON-RPC / gRPC)
-> Pending persistence
-> Orchestrator scheduling
-> Proof/certification execution
-> Proven header update
-> Settlement request creation
-> L1 transaction submission
-> Settled header/state update
Ownership by phase:
- Submission and request validation: `agglayer-jsonrpc-api`, `agglayer-rpc`, `agglayer-grpc-server`.
- Pending/proven/settled state transitions: `agglayer-storage` and `agglayer-certificate-orchestrator`.
- Proof generation and verification: `agglayer-aggregator-notifier`, `pessimistic-proof*` crates.
- L1 settlement: `agglayer-settlement-service`, `agglayer-contracts`, `agglayer-signer`.
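The lifecycle above can be sketched as a small status machine. This is a simplified model: the real status type in `agglayer-types` carries more states and error detail:

```rust
/// Simplified certificate status (illustrative, not the actual type).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum CertStatus {
    Pending,
    Proven,
    Settled,
}

/// Only the forward transitions of the pipeline are legal.
fn can_transition(from: CertStatus, to: CertStatus) -> bool {
    use CertStatus::*;
    matches!((from, to), (Pending, Proven) | (Proven, Settled))
}

fn main() {
    use CertStatus::*;
    assert!(can_transition(Pending, Proven));
    assert!(can_transition(Proven, Settled));
    // A certificate cannot settle without first being proven.
    assert!(!can_transition(Pending, Settled));
}
```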
Settlement flow
Settlement finalizes proven certificates on Ethereum L1.
- The orchestrator marks a certificate as ready for settlement.
- The settlement service constructs the contract call payload.
- The signer produces transaction signatures (local key or GCP KMS).
- Contract adapters submit via Alloy providers.
- Storage updates to settled status after confirmation criteria are met.
Safety expectations:
- Never bypass proof-validation preconditions before settlement.
- Keep retries idempotent and bounded by configuration.
- Treat signer and contract changes as security-sensitive, with explicit blast-radius analysis.
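A minimal sketch of a bounded, idempotent retry loop consistent with these expectations. The function name, the error type, and `max_attempts` are illustrative; idempotency here comes from resubmitting the same prepared payload on every attempt:

```rust
/// Retry a settlement submission at most `max_attempts` times (illustrative).
fn submit_with_retries<F>(max_attempts: u32, mut submit: F) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut last_err = String::new();
    for _ in 0..max_attempts {
        match submit() {
            Ok(()) => return Ok(()),
            Err(e) => last_err = e, // remember the failure, stay bounded
        }
    }
    Err(format!("gave up after {max_attempts} attempts: {last_err}"))
}

fn main() {
    let mut calls = 0;
    let result = submit_with_retries(3, || {
        calls += 1;
        if calls < 3 { Err("transient".into()) } else { Ok(()) }
    });
    assert!(result.is_ok());
    assert_eq!(calls, 3);
}
```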
Pessimistic Proof
Agglayer’s safety model relies on pessimistic proofs to validate cross-network state transitions. This chapter explains crate responsibilities, operational invariants, and the development workflow.
Architecture
The proof stack is intentionally split across three crates:
- `pessimistic-proof-core` contains transition logic, input/output structures, and proof-domain errors.
- `pessimistic-proof-program` is the SP1 guest program. It runs inside the zkVM and commits public outputs.
- `pessimistic-proof` is the host-side wrapper that embeds the ELF, drives proving/verifying, and maps errors for callers.
At runtime, Agglayer prepares witness inputs from certificate and network state, executes or requests proving, and verifies that outputs match expected commitments before allowing settlement.
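The final check described above can be sketched as a plain equality test over the committed public outputs. Field names here are illustrative, not the actual public-values layout:

```rust
/// Illustrative subset of proof public outputs.
#[derive(Debug, PartialEq)]
struct PublicOutputs {
    prev_pessimistic_root: [u8; 32],
    new_pessimistic_root: [u8; 32],
}

/// Settlement may proceed only if the proved outputs match what the node
/// expects from its own state.
fn outputs_match(proved: &PublicOutputs, expected: &PublicOutputs) -> bool {
    proved == expected
}

fn main() {
    let proved = PublicOutputs { prev_pessimistic_root: [1; 32], new_pessimistic_root: [2; 32] };
    let expected = PublicOutputs { prev_pessimistic_root: [1; 32], new_pessimistic_root: [2; 32] };
    assert!(outputs_match(&proved, &expected));
}
```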
Safety invariants
When changing proof-related code, treat these invariants as non-negotiable:
- Determinism: identical inputs must produce identical public outputs.
- Commitment compatibility: commitment version semantics (V2 through V5) must remain consistent with verifier expectations.
- State continuity: previous/new roots in proof outputs must correspond to storage transitions.
- Verifier identity stability: vkey changes are explicit protocol events, not incidental refactors.
For the full validity-check matrix, including proof and signature combinations, see `docs/validity_checks.md`.
Development workflow
Use the dedicated `cargo make` tasks for proof changes:

```shell
cargo make pp-elf
```

This workflow builds the ELF and runs vkey/cycle-tracker checks. For targeted checks:

```shell
cargo make pp-check-vkey-change
```

If a vkey change is intentional, acceptance must be explicit and reviewed:

```shell
cargo make pp-accept-vkey-change
```
Guidelines for safe edits:
- Prefer minimal, locally justified proof changes.
- Include proof-focused tests in `pessimistic-proof-test-suite` when behavior changes.
- Document any semantic changes to public values or commitments.
Protobuf and gRPC
Agglayer uses protobuf as the schema boundary for gRPC and storage payloads. This chapter defines file ownership, crate responsibilities, and safe evolution workflows.
Proto layout
Schemas live under `proto/agglayer/`.
- `proto/agglayer/node/` contains public node/service definitions.
- `proto/agglayer/storage/` contains storage-related protobuf schemas.
Generation and compatibility are configured via:
- `buf.yaml`
- `buf.rust.gen.yaml`
- `buf.storage.gen.yaml`
gRPC crate responsibilities
| Crate | Responsibility |
|---|---|
| agglayer-grpc-api | Service traits and API-facing request/response contracts |
| agglayer-grpc-types | Generated types and compatibility conversions |
| agglayer-grpc-server | Tonic server implementation and endpoint wiring |
| agglayer-grpc-client | Tonic client wrappers used by consumers/tests |
Standard workflow for schema changes
- Edit the `.proto` source file under `proto/agglayer/`.
- Regenerate artifacts: `cargo make generate-proto`
- Update server/client behavior in the relevant gRPC crates.
- Run verification checks and ensure generated outputs are committed.
Never edit generated code by hand. Generated files are outputs, not the source of truth.
Standard workflow for adding a new gRPC endpoint
- Add or extend the service definition in `proto/agglayer/node/...`.
- Regenerate protobuf and tonic outputs.
- Add conversion logic in `agglayer-grpc-types` if required.
- Implement server behavior in `agglayer-grpc-server`.
- Add/update client calls in `agglayer-grpc-client`.
- Add API and integration tests.
Compatibility rules
- Do not rename or repurpose existing field numbers.
- Prefer adding new optional fields over changing existing meaning.
- Use `reserved` fields/messages when removing obsolete numbers or names.
- Keep wire compatibility in mind for rolling deployments.
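These rules can be illustrated with a hypothetical message evolution; the message and field names below are invented for the example, not actual Agglayer schemas:

```protobuf
syntax = "proto3";

// Field 2 was removed: its number and name are reserved so they can never
// be reused with a different meaning. New data goes in a fresh number.
message ExampleHeader {
  reserved 2;
  reserved "legacy_status";

  bytes certificate_id = 1;
  uint32 network_id = 3;
  optional bytes settlement_tx_hash = 4; // additive, optional for old readers
}
```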
Storage
Agglayer storage is implemented on RocksDB with a strict separation between physical database mechanics and logical domain stores.
Database topology
Storage configuration exposes multiple paths,
either inferred from a common `db-path` or configured explicitly.
At runtime,
the node currently opens pending/state/epochs databases,
and optionally debug storage in debug mode.
| Database | Default subpath | Primary purpose |
|---|---|---|
| Pending DB | pending/ | Pending queue and proof material |
| State DB | state/ | Canonical per-network state |
| Epochs DB root | epochs/ | Root directory for per-epoch RocksDB instances |
| Debug DB | debug/ | Debug-only certificate traces (opened only in debug mode) |
See `crates/agglayer-config/src/storage.rs` for configuration details.
Note: `metadata_db_path` exists in config today, but node startup currently does not open a dedicated metadata RocksDB. Metadata is stored via the `metadata_cf` column family in the state DB.
Physical vs logical layers
- Physical layer (`crates/agglayer-storage/src/storage/`): typed column-family access, serialization codecs, batched writes, iterators, and RocksDB open/migration mechanics.
- Logical layer (`crates/agglayer-storage/src/stores/`): domain stores with business-oriented APIs (`StateStore`, `PendingStore`, `EpochStore`, `DebugStore`).
Keep domain policy in logical stores. Keep encoding and persistence mechanics in the physical layer.
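The split can be illustrated with an in-memory sketch: the physical layer exposes raw keyed byte access, while the logical store wraps it in a domain-oriented API that hides keys and encoding. All names here are illustrative, not the actual crate APIs:

```rust
use std::collections::HashMap;

/// Physical layer stand-in: raw keyed access for one column family.
/// (The real code uses typed RocksDB column families.)
trait ColumnFamily {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
}

struct InMemoryCf(HashMap<Vec<u8>, Vec<u8>>);

impl ColumnFamily for InMemoryCf {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.0.insert(key, value);
    }
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

/// Logical layer stand-in: callers speak in network IDs and headers,
/// never in raw keys or encodings.
struct StateStore<C: ColumnFamily> {
    headers: C,
}

impl<C: ColumnFamily> StateStore<C> {
    fn set_header(&mut self, network_id: u32, header: &[u8]) {
        self.headers.put(network_id.to_be_bytes().to_vec(), header.to_vec());
    }
    fn header(&self, network_id: u32) -> Option<Vec<u8>> {
        self.headers.get(&network_id.to_be_bytes())
    }
}

fn main() {
    let mut store = StateStore { headers: InMemoryCf(HashMap::new()) };
    store.set_header(7, b"hdr");
    assert_eq!(store.header(7).as_deref(), Some(&b"hdr"[..]));
    assert_eq!(store.header(8), None);
}
```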
Column families by store
State DB (`stores/state/cf_definitions.rs`):
- `certificate_header_cf`
- `certificate_per_network_cf`
- `latest_settled_certificate_per_network_cf`
- `metadata_cf`
- `local_exit_tree_per_network_cf`
- `balance_tree_per_network_cf`
- `nullifier_tree_per_network_cf`
- `network_info_cf`
- `disabled_networks_cf`

Pending DB (`stores/pending/cf_definitions.rs`):
- `latest_proven_certificate_per_network_cf`
- `latest_pending_certificate_per_network_cf`
- `pending_queue_cf`
- `proof_per_certificate_cf`

Per-epoch DB (`stores/per_epoch/cf_definitions.rs`):
- `per_epoch_certificates_cf`
- `per_epoch_metadata_cf`
- `per_epoch_proofs_cf`
- `per_epoch_start_checkpoint_cf`
- `per_epoch_end_checkpoint_cf`

Debug DB (`stores/debug/cf_definitions.rs`):
- `debug_certificates`
Migration bookkeeping also uses a dedicated migration column family.
Migrations, backups, and safety
- Migration logic lives under `crates/agglayer-storage/src/storage/migration/` and includes checks for unexpected/default column-family content.
- Storage protobuf schemas under `proto/agglayer/storage/v0/` define compatibility boundaries for stored structures.
- Backups are managed via storage backup configuration and CLI backup commands.
When changing storage schemas or keys:
- Define the migration path up front.
- Keep reads backward-compatible where possible.
- Add tests covering upgrade and rollback behavior.
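One common way to keep reads backward-compatible is a version byte on stored values, with the reader upgrading old layouts on the fly. The layout below is hypothetical, not the actual Agglayer storage codec:

```rust
/// Illustrative stored record with a versioned byte layout.
#[derive(Debug, PartialEq)]
struct Record {
    network_id: u32,
    flags: u8,
}

/// Decode both the old (v0) and new (v1) layouts; reject unknown versions
/// instead of guessing.
fn decode(bytes: &[u8]) -> Option<Record> {
    match bytes.split_first()? {
        // v0: four-byte network id only; flags default to 0.
        (&0, rest) if rest.len() == 4 => Some(Record {
            network_id: u32::from_be_bytes(rest.try_into().ok()?),
            flags: 0,
        }),
        // v1: network id followed by a flags byte.
        (&1, rest) if rest.len() == 5 => Some(Record {
            network_id: u32::from_be_bytes(rest[..4].try_into().ok()?),
            flags: rest[4],
        }),
        _ => None,
    }
}

fn main() {
    assert_eq!(decode(&[0, 0, 0, 0, 7]), Some(Record { network_id: 7, flags: 0 }));
    assert_eq!(decode(&[1, 0, 0, 0, 7, 3]), Some(Record { network_id: 7, flags: 3 }));
    assert_eq!(decode(&[9]), None); // unknown version is rejected
}
```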
AI agent configuration
This project uses AI coding agents (Claude Code and others) with shared configuration checked into the repository.
Design decisions
`.agents/skills/` as source of truth.
This keeps the configuration tool-agnostic.
Claude Code discovers skills via the `.claude/skills` symlink.
Use `docs/knowledge-base/` for domain and architecture knowledge.
Knowledge that should be read by both humans and agents belongs in mdbook
chapters under `docs/knowledge-base/src/`.
Skills should focus on workflows and decision procedures.
Prefer `.agents/skills/` over `.claude/rules/`.
Use `.agents/skills/` for most conventions.
`.claude/rules/` may be used for Claude-specific behavior
that doesn’t fit the skill model (e.g., sub-agent coordination).
Skills over AGENTS.md for task-specific workflows.
AGENTS.md contains only always-on behavioral rules and a documentation index.
Task-specific workflows (committing, PR creation, verification) are skills
that load on demand, reducing context consumption.
Use a scripted blast-radius detector for scope decisions.
`scripts/blast_radius.py` is the canonical detector for changed-file impact.
Verification workflows should consume its output
instead of re-deriving scope logic ad hoc.
Skill prefixes
Skill folders are prefixed by category to keep them organized:
- `workflow-`: step-by-step actions with side effects; usually manual-only (`disable-model-invocation: true`).
- `domain-`: agglayer-specific invariants and safety behavior.
- `analysis-`: investigation and reasoning tasks (debugging, etc.); usually no side effects.
- `tech-`: stack / tools playbooks, not domain-specific.
- `style-`: writing and formatting conventions.
- `meta-`: agent governance and maintenance workflows.
- `docs-`: knowledge-base maintenance workflows.
Adding a new skill
- Pick the appropriate prefix from the list above.
- Create `.agents/skills/<prefix><name>/SKILL.md` with YAML frontmatter and instructions.
- Use `disable-model-invocation: true` for manual-only workflows (e.g., `/commit`).
- Use `user-invocable: false` for background conventions that Claude should apply automatically but users shouldn’t invoke directly.
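A new skill file might look like the following sketch. The skill name and body are invented for illustration, and the only frontmatter keys taken from this chapter are `disable-model-invocation` and `user-invocable`; check existing skills for the full supported set:

```markdown
---
name: workflow-commit
description: Step-by-step commit workflow with verification gates.
disable-model-invocation: true
---

# Commit workflow

1. Run the verification checks for the touched crates.
2. Stage related changes together and write the commit message.
```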
End-of-session retrospective
Run `/session-retro` at the end of a session to review the conversation
and propose improvements to skills, documentation, or AGENTS.md.
Documentation publishing
Documentation is built and published by `.github/workflows/doc.yml`
on every PR and every push to main.
The pipeline publishes two outputs together:
- Knowledge base (mdbook): `docs/knowledge-base/`.
- Rust API docs (rustdoc): `cargo doc --no-deps --all --all-features`.
The deployed site uses this layout:
- `/` -> knowledge-base landing page.
- `/rustdoc/agglayer/` -> Rust API docs.
| Trigger | URL |
|---|---|
| Push to main | GitHub Pages (https://agglayer.github.io/agglayer/) |
| Pull request | Cloudflare Workers preview (https://<repo>-pr-<PR_NUMBER>-rust-docs.agglayer.dev) |
For PR previews, the workflow posts the deployed URL as a PR comment automatically.
Merge-queue behavior
The project uses GitHub merge queue. Two events matter:
- `merge_group`: pre-merge validation builds docs but does not deploy.
- `push` to `main`: deployment to GitHub Pages after merge queue completion.
The `deploy-gh-pages` job guards against fork deployment with `!github.event.repository.fork`.
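A sketch of how such a guard typically appears in a workflow file; the job name matches the one mentioned above, but the steps and other details are illustrative, not the actual contents of `doc.yml`:

```yaml
# Illustrative fragment only — see .github/workflows/doc.yml for the real job.
deploy-gh-pages:
  if: ${{ github.event_name == 'push' && !github.event.repository.fork }}
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # build mdbook + rustdoc, then deploy the combined site to GitHub Pages
```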
Contributor expectations
- For knowledge-base changes, ensure `mdbook build docs/knowledge-base/` succeeds locally.
- For API changes, keep `///` Rust docs accurate and complete for public and `pub(crate)` items.
- When linking between rustdoc items, avoid links known to break for `pub(crate)` cross-item references.
Agglayer crates are not published to crates.io, so docs.rs is not the canonical documentation surface. GitHub Pages is the canonical published location.