v1.8.0 - 2025-12-17
This is Charon's v1.8.0 release. Feedback is welcome and appreciated; please use GitHub issues or Discord if you have trouble with this release.
Partial deposit submission and retrieval
Support for submitting and fetching partial validator deposits via the new `charon deposit sign` and `charon deposit fetch` commands. This enables operators to update deposit data for validators after cluster creation but before activation.
This feature is particularly useful for large clusters where only a subset of validators are activated initially, and business or operational requirements may change over time. Operators can re-sign updated deposit data, with partial signatures aggregated once a threshold is reached, after which the full deposit data can be retrieved with `charon deposit fetch` or from the Obol API.
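The threshold flow above can be sketched as follows. This is an illustrative model only, not Charon's implementation: Charon uses BLS threshold signatures, whereas here aggregation is modelled as a simple join once enough distinct operators have contributed. All names in the snippet are hypothetical.

```python
# Illustrative sketch: collect partial deposit signatures from cluster
# operators and aggregate once a threshold is reached. The real scheme
# (BLS threshold signature recovery) is stood in for by concatenation.

class PartialDepositCollector:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.partials: dict[int, bytes] = {}  # operator index -> partial signature

    def submit(self, operator: int, partial: bytes) -> bool:
        """Record one operator's partial; return True once threshold is met."""
        self.partials[operator] = partial
        return len(self.partials) >= self.threshold

    def aggregate(self) -> bytes:
        """Combine partials into full deposit data (stand-in for BLS recovery)."""
        if len(self.partials) < self.threshold:
            raise ValueError("below threshold; cannot aggregate yet")
        return b"".join(sig for _, sig in sorted(self.partials.items()))

collector = PartialDepositCollector(threshold=3)
collector.submit(1, b"\x01")
collector.submit(2, b"\x02")
ready = collector.submit(3, b"\x03")  # threshold reached here
full_deposit = collector.aggregate()
```

The key property mirrored here is that no single operator can produce valid deposit data alone; only a threshold of partials yields the full signature.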
Read the rest of the release notes for more:
Full Changelog: v1.7.3..v1.8.0
Feature
- Add `charon deposit sign` and `charon deposit fetch` #3992 (#4032)
- Update minimum version warning log for Fusaka fork #3978 (#4000)
- Track blinded/unblinded block building from the BN #4106 (#4117)
- Add `Date-Milliseconds` and `X-Timeout-Ms` to `charon alpha test mev` #4045 (#4074)
Bug
- Unknown git hash in Charon v1.5.2 pre-built binary #4001 (#4057)
- Signed block fetcher retries on missed blocks #4035 (#4041)
Refactor
- Catch HTTP 409 Duplicate Request gracefully in `charon dkg --publish` #4107 (#4121)
- Errors revamp #3882 (#4050, #4044, #4046, #4047, #4038)
- Remove the `pretty` key from JSON logging #3924 (#4005)
- Move chain split check to pre-prepare round (#4011)
- Add `eth2wrap` logic to proxy (#4042)
- Builder registration redesign (#4060)
Test
Misc
Compatibility Matrix
This release of Charon is backwards compatible with Charon v1.0.*, v1.1.*, v1.2.0, v1.3.*, v1.4.*, v1.5.*, v1.6.*, and v1.7.*. Note that only v1.3.* and newer are Pectra-ready, and only v1.7.* and newer are Fulu-ready.
The matrix below details the combinations of beacon node (consensus layer) and validator clients, with corresponding versions, that the DV Labs team has tested with this Charon release. More validator and consensus clients will be added to this list as they are supported in our automated testing framework.
Legend
- ✅: All duties succeed in testing
- 🟡: All duties succeed in testing, except non-penalised aggregation duties
- 🟠: Duties may fail for this combination
- 🔴: One or more duties fails consistently
| Validator 👉 Consensus 👇 | Teku v25.12.0 ❗ | Lighthouse v8.0.1 | Lodestar v1.38.0 | Nimbus v25.11.1 | Prysm v7.1.0 | Vouch 1.12.0 ❗ |
|---|---|---|---|---|---|---|
| Teku v25.12.0 ❗ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟡 |
| Lighthouse v8.0.1 | ✅ | ✅ | ✅ | ✅ | ✅ | 🟡 |
| Lodestar v1.38.0 | ✅ | ✅ | ✅ | ✅ | ✅ | 🟡 |
| Nimbus v25.11.1 | ✅ | ✅ | ✅ | ✅ | ✅ | 🟡 |
| Prysm v7.1.0 | ✅ | ✅ | ✅ | ✅ | ✅ | 🟡 |
| Grandine v2.0.1 | ✅ | ✅ | ✅ | ✅ | ✅ | 🟡 |
Note
There is currently an incompatibility between validator clients that may cause attestation aggregation duties to fail. Aggregation duties are neither economically rewarded nor penalised.
To ensure aggregations succeed, have at least a threshold of nodes in the cluster running one of Lodestar, Lighthouse, or Nimbus; alternatively, have a threshold of nodes running either Teku or Prysm. This incompatibility will be remediated in upcoming client releases.
Warning
In versions earlier than v1.37.0, the Lodestar validator client's default behaviour is to skip the next slot if it fails an attestation or aggregation. This can impact your cluster's performance, particularly if more than your cluster's fault tolerance threshold of nodes run Lodestar's validator client and the cluster runs many validators. This has been fixed in v1.37.0.
If your cluster is not successfully aggregating, you should ideally swap to a set of compatible validator clients listed above, and ensure your clients have the appropriate `--distributed` flag set to enable distributed aggregation mode.