Anchor Book
Documentation for Anchor users and developers.
The Anchor client is an SSV client written in Rust. Anchor has been built from the ground up to be highly performant and secure.
This book aims to provide help and support to users and developers of this client.
Note: The Anchor client is currently under active development and should not be used in a production setting.
About this Book
This book is open source, contribute at github.com/sigp/anchor/book.
The Anchor CI system maintains a hosted version of the unstable branch at anchor-book.sigmaprime.io.
Anchor Installation Guide
This guide provides step-by-step instructions for installing Anchor.
1. Download the Latest Release from GitHub
- Visit the Anchor Releases page.
- Download the appropriate binary for your operating system.
- Extract the file if necessary and move the binary to a location in your `PATH` (e.g., `/usr/local/bin/`).
Example
wget https://github.com/sigp/anchor/releases/download/<version>/anchor-<platform>.tar.gz
# Replace <version> and <platform> with the appropriate values.
tar -xvf anchor-<platform>.tar.gz
sudo mv anchor /usr/local/bin/
Verify the installation:
anchor --version
2. Run Anchor Using Docker
- Pull the latest Anchor Docker image:
  `docker pull sigp/anchor:latest`
- Run the Anchor container:
  `docker run --rm -it sigp/anchor:latest --help`
3. Clone and Build Locally
- Clone the Anchor repository:
  `git clone https://github.com/sigp/anchor.git`
  `cd anchor`
- Build the Anchor binary:
  `cargo build --release`
  The binary will be located in `./target/release/`.
- Move the binary to a location in your `PATH`:
  `sudo mv target/release/anchor /usr/local/bin/`
Verify the installation:
anchor --version
Metrics
Anchor comes pre-built with a suite of metrics for developers or users to monitor the health and performance of their node.
They must be enabled at runtime using the `--metrics` CLI flag.
Usage
Running a metrics server requires Docker. Once Docker is installed, a metrics server can be run locally via the following steps:
- Start an Anchor node with `anchor --metrics`. The `--metrics` flag is required for metrics.
- Move into the metrics directory with `cd metrics`.
- Bring the environment up with `docker-compose up --build -d`.
- Ensure that Prometheus can access your Anchor node by ensuring it is in the UP state at http://localhost:9090/targets.
- Browse to http://localhost:3000
  - Username: `admin`
  - Password: `changeme`
- Import some dashboards from the `metrics/dashboards` directory in this repo:
  - In the Grafana UI, go to Dashboards -> Manage -> Import -> Upload .json file.
  - The `Summary.json` dashboard is a good place to start.
A suite of dashboards can be found in the `metrics/dashboards` directory. The Anchor team will frequently update these dashboards as new metrics are introduced.
We welcome Pull Requests for any users wishing to add their dashboards to this repository for others to share.
Scrape Targets
Prometheus periodically reads the `metrics/scrape-targets/scrape-targets.json` file. This file tells Prometheus which endpoints to collect data from. The current file is set up to read from Anchor on its default metrics port. You can add additional endpoints if you want to collect metrics from other servers.
An example is Lighthouse. You can collect metrics from Anchor and Lighthouse simultaneously if they are both running. We have an example file, `scrape-targets-lighthouse.json`, which allows this. You can replace the `scrape-targets.json` file with the contents of `scrape-targets-lighthouse.json` if you wish to collect metrics from Anchor and Lighthouse simultaneously.
Hosting Publicly
By default, Prometheus and Grafana will only bind to localhost (127.0.0.1), in order to protect you from accidentally exposing them to the public internet. If you would like to change this, you must edit the `http_addr` in `metrics/grafana/grafana.ini`.
Frequently Asked Questions
What is sigp/anchor?
The Rust implementation of the Secret Shared Validator (SSV) protocol.
Development Environment
Most Anchor developers work on Linux or macOS; however, Windows should still be suitable.
First, follow the Installation Guide to install Anchor. This will install Anchor to your `PATH`, which is not particularly useful for development but is still a good way to ensure you have the base dependencies.
The additional requirements for developers are:
- `docker`. Some tests need Docker installed and running.
Using make
Commands to run the test suite are available via the Makefile in the project root for the benefit of CI/CD. We list some of these commands below so you can run them locally and avoid CI failures:
- `make cargo-fmt`: (fast) runs a Rust code formatting check.
- `make lint`: (fast) runs a Rust code linter.
- `make test`: (medium) runs unit tests across the whole project.
- `make test-specs`: (medium) runs the Anchor test vectors.
- `make test-full`: (slow) runs the full test suite (including all previous commands). This is approximately everything that is required to pass CI.
Testing
As with most other Rust projects, Anchor uses `cargo test` for unit and integration tests. For example, to test the `qbft` crate run:
cd src/qbft
cargo test
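For developers new to Rust, unit tests inside a crate follow the standard pattern sketched below. The function and test names here are made up for illustration and do not come from the Anchor codebase; `cargo test` discovers and runs every function marked `#[test]`.

```rust
/// A trivial function to demonstrate the test layout (hypothetical,
/// not part of Anchor).
pub fn add(a: u64, b: u64) -> u64 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    // `cargo test` compiles this module only in test builds and runs
    // each #[test] function, failing on any panicked assertion.
    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 3), 5);
    }
}

fn main() {
    assert_eq!(add(2, 3), 5);
}
```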
Local Testnets
During development and testing it can be useful to start a small, local testnet.
Testnet scripts will be built as the project develops.
Contributing to Anchor
Anchor welcomes contributions. If you are interested in contributing to this project and want to learn Rust, feel free to join us in building it.
To start contributing:
- Read our how to contribute document.
- Setup a development environment.
- Browse through the open issues (tip: look for the good first issue tag).
- Comment on an issue before starting work.
- Share your work via a pull-request.
Branches
Anchor maintains two permanent branches:
- `stable`: Always points to the latest stable release. This is ideal for most users.
- `unstable`: Used for development; contains the latest PRs. Developers should base their PRs on this branch.
Rust
We adhere to Rust code conventions as outlined in the Rust Styleguide.
Please use clippy and rustfmt to detect common mistakes and inconsistent code formatting:
cargo clippy --all
cargo fmt --all --check
Panics
Generally, panics should be avoided at all costs. Anchor operates in an adversarial environment (the Internet) and it's a severe vulnerability if people on the Internet can cause Anchor to crash via a panic.
Always prefer returning a `Result` or `Option` over causing a panic. For example, prefer `array.get(1)?` over `array[1]`.
If you know there won't be a panic but can't express that to the compiler, use `.expect("Helpful message")` instead of `.unwrap()`. Always provide detailed reasoning in a nearby comment when making assumptions about panics.
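A minimal sketch of this guideline (the function is hypothetical, not from the Anchor codebase): fallible slice access returns an `Option` instead of panicking on short input.

```rust
/// Returns the first two bytes of `input`, or `None` if it is too short.
/// Using `get` instead of indexing means a short slice yields `None`
/// rather than a panic.
fn first_two(input: &[u8]) -> Option<(u8, u8)> {
    let a = *input.get(0)?;
    let b = *input.get(1)?;
    Some((a, b))
}

fn main() {
    assert_eq!(first_two(&[1, 2, 3]), Some((1, 2)));
    // A one-byte slice is handled gracefully, no panic.
    assert_eq!(first_two(&[9]), None);
}
```

The same shape applies to any adversarial input path: the caller decides how to handle `None` (or an `Err`), rather than the remote peer deciding when the process crashes.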
TODOs
All `TODO` statements should be accompanied by a GitHub issue.
pub fn my_function(&mut self, _something: &[u8]) -> Result<String, Error> {
    // TODO: something_here
    // https://github.com/sigp/anchor/issues/XX
}
Comments
General Comments
- Prefer line (`//`) comments to block comments (`/* ... */`).
- Comments can appear on the line prior to the item or after a trailing space.
// Comment for this struct
struct Anchor {}

fn validate_attestation() {} // A comment on the same line after a space
Doc Comments
- `///` is used to generate doc comments.
- The comments should come before attributes.
/// Stores the core configuration for this instance.
/// This struct is general, other components may implement more
/// specialized config structs.
#[derive(Clone)]
pub struct Config {
    pub data_dir: PathBuf,
    pub p2p_listen_port: u16,
}
Rust Resources
Rust is an extremely powerful, low-level programming language that provides the freedom and performance to build ambitious projects. The Rust Book provides insight into the Rust language and some of the coding style to follow (as well as acting as a great introduction and tutorial for the language).
Rust has a steep learning curve, but there are many resources to help. We suggest:
- Rust Book
- Rust by example
- Learning Rust With Entirely Too Many Linked Lists
- Rustlings
- Rust Exercism
- Learn X in Y minutes - Rust
For Protocol Developers
Documentation for protocol developers.
This section lists Anchor-specific decisions that are not strictly spec'd and may be useful for other protocol developers wishing to interact with Anchor.
SSV NodeInfo Handshake Protocol
The protocol is used by SSV-based nodes to exchange basic node metadata and validate each other's identity when establishing a connection over Libp2p under a dedicated protocol ID. The spec is defined here in the SSV NodeInfo Handshake Protocol.
SSV NodeInfo Handshake Protocol Specification
This document specifies the SSV NodeInfo Handshake Protocol. The protocol is used by SSV-based nodes to exchange basic node metadata and validate each other's identity when establishing a connection over Libp2p under a dedicated protocol ID.
Table of Contents
- 1. Introduction
- 2. Definitions
- 3. Protocol Constants
- 4. Data Structures
- 5. Serialization and Signing
- 6. Handshake Protocol Flows
- 7. Security Considerations
- 8. Rationale and Notes
- 9. Examples
1. Introduction
The SSV NodeInfo Handshake Protocol defines how two SSV nodes exchange, sign, and verify each other's NodeInfo, which includes a `network_id` (such as "holesky", "prater", etc.) and optional metadata about node software versions or subnets. The protocol uses a request-response style handshake over Libp2p under a dedicated protocol ID.
The high-level handshake steps are:
- Requester sends an Envelope (containing its NodeInfo) to the peer.
- Responder verifies this Envelope, checks the `network_id`, and replies with its own Envelope.
- Requester verifies the responder's Envelope.
- Both sides proceed if verification succeeds; otherwise, the handshake is considered failed.
2. Definitions
2.1 Terminology
| Term | Definition |
|---|---|
| Envelope | A Protobuf-encoded message containing a `public_key`, `payload_type`, `payload`, and `signature` (covering a domain-separated concatenation of fields). |
| NodeInfo | A JSON-based structure holding key node attributes like `network_id` plus optional metadata. |
| Handshake | The request-response exchange of Envelopes between two nodes at connection time. |
2.2 Domain Separation
- Domain: `"ssv"`. Used to separate signatures for different contexts or protocols.
3. Protocol Constants
| Name | Value | Description |
|---|---|---|
| DOMAIN | `ssv` | Fixed ASCII text used during signature generation. |
| PAYLOAD_TYPE | `ssv/nodeinfo` | Identifies the payload as an SSV NodeInfo structure. |
| PROTOCOL_ID | `/ssv/info/0.0.1` | Libp2p protocol ID used for the handshake. |
4. Data Structures
4.1 Envelope
The Envelope is a Protobuf message:
message Envelope {
bytes public_key = 1;
bytes payload_type = 2;
bytes payload = 3;
bytes signature = 5;
}
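As a language-neutral reference, the same message can be mirrored as a plain Rust struct. This is purely illustrative: a real implementation would generate the type from the `.proto` definition (e.g., with a tool like prost) rather than writing it by hand.

```rust
/// Plain-Rust mirror of the Envelope Protobuf message above.
/// Field numbers (1, 2, 3, 5) live in the .proto definition; field 4
/// is intentionally unused, matching the message as specified.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct Envelope {
    pub public_key: Vec<u8>,
    pub payload_type: Vec<u8>,
    pub payload: Vec<u8>,
    pub signature: Vec<u8>,
}

fn main() {
    // Construct an envelope with only the payload type set.
    let env = Envelope {
        payload_type: b"ssv/nodeinfo".to_vec(),
        ..Default::default()
    };
    assert_eq!(env.payload_type, b"ssv/nodeinfo".to_vec());
}
```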
4.2 NodeInfo
NodeInfo:
- network_id: String
- metadata: NodeMetadata (optional)
4.3 NodeMetadata
NodeMetadata:
- node_version: String
- execution_node: String
- consensus_node: String
- subnets: String
5. Serialization and Signing
5.1 Envelope Fields
- `public_key`
  - Sender’s public key in serialized form (e.g., compressed Secp256k1 or raw Ed25519 bytes).
  - The public key is encoded and decoded using Protobuf.
  - For reference, Libp2p has a Peer IDs and Keys specification, which may be consulted for consistent handling across implementations.
- `payload_type`
  - MUST be `"ssv/nodeinfo"` in this protocol.
  - Used to identify how to interpret `payload`.
- `payload`
  - Contains `NodeInfo` data in JSON (described below).
- `signature`
  - A cryptographic signature covering `DOMAIN || payload_type || payload`.
5.2 NodeInfo JSON Layout
Internally, the protocol uses a “legacy” layout for `NodeInfo` serialization, with a top-level JSON structure:
{
"Entries": [
"", // (Index 0) Old forkVersion, not used
"<network_id>", // (Index 1) The NodeInfo.network_id
"<json-encoded metadata>" // (Index 2) if NodeMetadata is present
]
}
- If the array has fewer than 2 entries, the payload is invalid.
- If the array has 3 entries, the 3rd entry is a JSON object for metadata, for example:
{
"NodeVersion": "...",
"ExecutionNode": "...",
"ConsensusNode": "...",
"Subnets": "..."
}
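As a sketch of the layout rules above, the following hypothetical Rust helper builds the legacy Entries array by hand. A real implementation would use a JSON library (e.g., serde_json) with proper escaping; the helper name and manual string building are illustrative only.

```rust
/// Builds the legacy "Entries" JSON layout described above:
/// index 0 is the unused fork version, index 1 the network_id, and
/// index 2 (if present) the JSON-encoded metadata as a string.
fn node_info_entries(network_id: &str, metadata_json: Option<&str>) -> String {
    match metadata_json {
        Some(meta) => format!(
            "{{\"Entries\":[\"\",\"{}\",\"{}\"]}}",
            network_id,
            // Escape backslashes and quotes so the metadata nests as a
            // JSON string value.
            meta.replace('\\', "\\\\").replace('"', "\\\"")
        ),
        None => format!("{{\"Entries\":[\"\",\"{}\"]}}", network_id),
    }
}

fn main() {
    assert_eq!(
        node_info_entries("holesky", None),
        "{\"Entries\":[\"\",\"holesky\"]}"
    );
}
```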
5.3 Signature Preparation
To sign an Envelope, implementations:
- Construct the unsigned message: `unsigned_message = DOMAIN || payload_type || payload`
- Sign `unsigned_message` using the node’s private key.
- Write the resulting signature to `signature`.
To verify an Envelope:
- Recompute the `unsigned_message`.
- Verify using `public_key` against `signature`.
If verification fails, the handshake MUST abort.
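The unsigned message is plain byte concatenation. A minimal sketch in Rust (the function name is illustrative, not Anchor's actual API):

```rust
const DOMAIN: &[u8] = b"ssv";
const PAYLOAD_TYPE: &[u8] = b"ssv/nodeinfo";

/// Computes the bytes that are signed and verified:
/// DOMAIN || payload_type || payload.
fn unsigned_message(payload: &[u8]) -> Vec<u8> {
    let mut msg =
        Vec::with_capacity(DOMAIN.len() + PAYLOAD_TYPE.len() + payload.len());
    msg.extend_from_slice(DOMAIN);
    msg.extend_from_slice(PAYLOAD_TYPE);
    msg.extend_from_slice(payload);
    msg
}

fn main() {
    // "ssv" || "ssv/nodeinfo" || "{}"
    assert_eq!(unsigned_message(b"{}"), b"ssvssv/nodeinfo{}".to_vec());
}
```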
6. Handshake Protocol Flows
6.1 Protocol ID
Both peers must speak the protocol identified by:
/ssv/info/0.0.1
6.2 Request Phase
- Build Envelope
  - The initiating node (Requester) serializes its `NodeInfo` into JSON (the `payload`).
  - Sets `payload_type = "ssv/nodeinfo"`.
  - Prepends `DOMAIN = "ssv"` when computing the signature.
  - Places the resulting `public_key` and `signature` into the Envelope.
- Send Request
  - The requester sends this Envelope as the request.
- Wait for Response
  - The requester awaits the single response from the Responder.
6.3 Response Phase
- Receive & Verify
  - The responder verifies the incoming Envelope:
    - Check signature correctness.
    - Extract `NodeInfo`.
    - Validate `network_id` if necessary (see 6.4).
- Build Response
  - If valid, the responder builds and signs its own Envelope containing its `NodeInfo`.
- Send Response
  - The responder sends the Envelope back to the requester.
- Requester Verifies
  - The requester verifies the signature, parses `NodeInfo`, and checks `network_id`.
6.4 Network Mismatch Checks
- Implementations MUST check whether the received `NodeInfo`’s `network_id` matches their local `network_id`.
- If they mismatch, the implementation SHOULD reject the connection.
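A sketch of that check in Rust (the error type and function names here are illustrative, not Anchor's actual API):

```rust
#[derive(Debug, PartialEq)]
enum HandshakeError {
    /// The peer advertised a different network than ours.
    NetworkMismatch { local: String, remote: String },
}

/// Rejects peers whose advertised network_id differs from the local one.
fn check_network_id(local: &str, remote: &str) -> Result<(), HandshakeError> {
    if local == remote {
        Ok(())
    } else {
        Err(HandshakeError::NetworkMismatch {
            local: local.to_owned(),
            remote: remote.to_owned(),
        })
    }
}

fn main() {
    assert!(check_network_id("holesky", "holesky").is_ok());
    assert!(check_network_id("holesky", "mainnet").is_err());
}
```

Returning a structured error (rather than a bare boolean) lets the caller log which networks mismatched before dropping the connection.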
7. Security Considerations
- Signature Validation is mandatory. Any failure to verify the Envelope’s signature indicates an invalid handshake.
- Public Key Authenticity: The Envelope’s `public_key` is not implicitly trusted. It must match the verified signature.
- Network Mismatch: Avoid bridging distinct SSV or Ethereum networks. Peers claiming the wrong `network_id` should be rejected.
- Payload Size: Although `NodeInfo` is generally small, implementations SHOULD impose a maximum bound for the payload. Any request or response exceeding this size limit SHOULD be rejected.
8. Rationale and Notes
- Using a Protobuf-based Envelope simplifies cross-language interoperability.
- The domain separation string (`"ssv"`) prevents signature reuse in other contexts.
- The “legacy” `Entries` layout ensures backward compatibility with older SSV implementations.
9. Examples
9.1 Example Envelope in Hex
An example Envelope could be hex-encoded as:
0a250802122102ba6a707dcec6c60ba2793d52123d34b22556964fc798d4aa88ffc41a00e42407120c7373762f6e6f6465696e666f1aa5017b22456e7472696573223a5b22222c22686f6c65736b79222c227b5c224e6f646556657273696f6e5c223a5c22676574682f785c222c5c22457865637574696f6e4e6f64655c223a5c22676574682f785c222c5c22436f6e73656e7375734e6f64655c223a5c22707279736d2f785c222c5c225375626e6574735c223a5c2230303030303030303030303030303030303030303030303030303030303030303030305c227d225d7d2a473045022100b8a2a668113330369e74b86ec818a87009e2a351f7ee4c0e431e1f659dd1bc3f02202b1ebf418efa7fb0541f77703bea8563234a1b70b8391d43daa40b6e7c3fcc84
Decoding reveals (high-level view):
Envelope {
public_key = <raw bytes>,
payload_type = "ssv/nodeinfo",
payload = {
"Entries": [
"",
"holesky",
"{\"NodeVersion\":\"geth/x\",\"ExecutionNode\":\"geth/x\",\"ConsensusNode\":\"prysm/x\",\"Subnets\":\"00000000000000000000000000000000\"}"
]
},
signature = <signature bytes>
}
9.2 Verifying the Envelope
- Recompute:
  `domain = "ssv"`
  `unsigned_message = "ssv" || "ssv/nodeinfo" || payload_bytes`
- Verify the signature with `public_key`.
- Parse the payload JSON => parse `NodeInfo` => check `network_id`.
Architectural Overview
This section provides developers an overview of the architectural design of Anchor. The intent of this is to help gain an easy understanding of the client and associated code.
Thread Model
Anchor is a multi-threaded client. There are a number of long-standing tasks that are spawned when Anchor is initialised. This section lists these high-level tasks and describes their purpose along with how they are connected.
Task Overview
The following diagram gives a basic overview of the core tasks inside Anchor.
graph
    A(Core Client) <--> B(HTTP API)
    A(Core Client) <--> C(Metrics)
    A(Core Client) <--> E(Execution Service)
    A(Core Client) <--> F(Duties Service)
    F <--> G(Processor)
    H(Network) <--> G
    I(QBFT) <--> G
The boxes here represent standalone tasks, with the arrows representing channels between them. Memory is often shared between these tasks, but this diagram gives an overview of how the client is pieced together.
The tasks are:
- HTTP API - An HTTP endpoint to read data from the client, or modify specific components.
- Metrics - Another HTTP endpoint designed to be scraped by a Prometheus instance. This provides real-time metrics of the client.
- Execution Service - A service used to sync SSV information from the execution layer nodes.
- Duties Service - A service used to watch the beacon chain for validator duties for our known SSV validator shares.
- Network - The p2p network stack (libp2p) that sends/receives data on the SSV p2p network.
- Processor - A middleware that handles CPU intensive tasks and prioritises the workload of the client.
- QBFT - Spawns a QBFT instance and drives it to completion in order to reach consensus in an SSV committee.
General Event Flow
Generally, tasks can operate independently from the core client. The main task that drives the Anchor client is the duties service. It specifies when and what kind of validator duty we must be doing at any given time.
Once a specific duty is assigned, a message is sent to the processor to start one (or many) QBFT instances for this specific duty or duties. Simultaneously, we are awaiting messages on the network service. As messages are received they are routed to the processor, validation is performed and then they are routed to the appropriate QBFT instance.
Once we have reached consensus, messages are sent (via the processor) to the network to sign our required duty.
A summary of this process is:
- Await a duty from the duties service
- Send the duty to the processor
- The processor spins up a QBFT instance
- Receive messages until the QBFT instance completes
- Sign required consensus message
- Publish the message on the p2p network.
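The hand-off between tasks in the steps above can be pictured with a toy channel example. This is purely illustrative: Anchor's real services, message types, and async runtime are more involved, and the `Duty` struct here is hypothetical.

```rust
use std::sync::mpsc;
use std::thread;

/// A stand-in for a validator duty produced by the duties service.
#[derive(Debug, PartialEq)]
pub struct Duty {
    pub validator_index: u64,
}

/// Sends one duty from a "duties service" thread to the "processor"
/// over a channel, mirroring the event flow described above.
pub fn run_pipeline() -> Duty {
    let (tx, rx) = mpsc::channel();

    // "Duties service": discovers a duty and hands it to the processor.
    let duties = thread::spawn(move || {
        tx.send(Duty { validator_index: 42 }).unwrap();
    });

    // "Processor": receives the duty; in Anchor it would now spin up a
    // QBFT instance and drive it to completion.
    let duty = rx.recv().unwrap();
    duties.join().unwrap();
    duty
}

fn main() {
    assert_eq!(run_pipeline(), Duty { validator_index: 42 });
}
```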
An overview of how these threads are linked together is given below:
graph LR
    A(Core Client) <--> B(Duties Service)
    B <--> C(Processor)
    C <--> D(Network)
    C <--> E(QBFT)