This is a non-urgent pre-release update for Hoodi operators. It brings improved monitoring (Promtail is swapped for Grafana Alloy), alerting (multiple Discord IDs can be added so you get tagged when there is an issue), tracing (allowing us to better debug proposals), and updated client versions (all clients updated to their latest releases).
Please note the mandatory adjustment of `.env` variables required since v0.2.3 if you are updating to this release.
Warning
Breaking Change
This release adjusts how metrics are sent to Obol. After this update, you will no longer have to stash and unstash changes to `prometheus/prometheus.yml` on every update. The file is now generated at runtime by `./prometheus/run.sh`, and the required variables are injected into it from your `.env`. The steps you need to take to handle this change are described below.
- If you currently have a locally modified `prometheus/prometheus.yml`, make a backup copy of it outside of the repository before upgrading (just in case), and record the value of `PROM_REMOTE_WRITE_TOKEN` from the `authorization.credentials` section of that file. A shell sketch of this migration follows the list.
- This token must now be provided via your `.env` file instead of being defined in `prometheus.yml`. The variable `PROM_REMOTE_WRITE_TOKEN` is present (commented out) in all `.env.sample.*` files. To configure it, copy the variable into your own `.env`, uncomment it, and set your token, for example: `PROM_REMOTE_WRITE_TOKEN=obolH7d...`
- We now provide a dedicated way to add Discord IDs to your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) `ALERT_DISCORD_IDS=` in your `.env` and specify one or more IDs, comma-separated. To get the ID that corresponds to your Discord account, enable developer mode in Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to get the unique 18-digit number that represents their account. If you previously changed your `CHARON_NICKNAME` to your Discord ID, you can set it back to a human-friendly name for your node.
- If you have any other custom modifications to your original `prometheus.yml`, compare it against `prometheus.yml.example`, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to `prometheus.yml.example`.
- With your environment variables set, and any other modifications ported to the `prometheus.yml.example` file, run `docker compose up -d` as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old `prometheus.yml`.
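As a rough illustration of the steps above, the sketch below backs up the old file, pulls the token out of it, and appends the two new variables to `.env`. The backup path, the token value, and the Discord IDs are placeholders, and appending with a heredoc is just one option; you can equally uncomment and edit the existing lines in your `.env` by hand.

```sh
# Back up the locally modified prometheus.yml outside the repository (path is an example)
cp prometheus/prometheus.yml ~/prometheus.yml.bak

# Recover the remote-write token from the old file (it sits under authorization -> credentials)
grep -A 2 "authorization" ~/prometheus.yml.bak

# Provide the token and your Discord IDs via .env (both variables already exist,
# commented out, in the .env.sample.* files; the values below are placeholders)
cat >> .env <<'EOF'
PROM_REMOTE_WRITE_TOKEN=obolH7d...
ALERT_DISCORD_IDS=123456789012345678,234567890123456789
EOF
```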
Important
To maximise compatibility of environment variable interpolation across operating systems, changes have been made between this version and v0.2.3.
Please set (uncomment) the following `.env` variables if you haven't already, or your `charon`, `mev-boost`, and `charon-dv-exit-sidecar` containers may not start properly.
(Copying the `.env.sample.mainnet` file afresh and re-applying any modifications you previously made may be the least disruptive way to pick up these changes; a sketch of that approach follows the variable list below.)
```
CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"
```
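If you prefer the "copy the sample afresh" route mentioned above, a minimal sketch for a mainnet deployment could look like the following; the backup filename is only an example.

```sh
# Keep a copy of your current configuration, then start again from the fresh sample
cp .env .env.bak
cp .env.sample.mainnet .env

# Compare the two and re-apply any values you had customized (endpoints, names, tokens, ...)
diff .env.bak .env
```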
Important
Users should no longer run `docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml` to run their cluster; they should instead simply run `docker compose up -d`. Overrides that turn off certain containers (e.g. where you use an external EL/CL rather than the ones in this repo) should be done with `.env` variables, for example by setting `EL=el-none`, `CL=cl-none`, and `MEV=mev-none`. Read more about client swapping in our docs.
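For example, a node that relies on an externally managed execution and consensus client, with no local MEV-boost, might carry the following `.env` values (the profile names are the ones quoted above):

```
# Disable the bundled EL, CL and MEV-boost containers; point charon at your external endpoints separately
EL=el-none
CL=cl-none
MEV=mev-none
```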
Logging can be enabled by uncommenting the line `#MONITORING=${MONITORING:-monitoring},monitoring-log-collector` in your `.env` file, and setting `CHARON_LOKI_ADDRESSES` to a URI given to you by the Obol team.
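Once uncommented, the relevant `.env` lines might look like this, where the Loki address is a placeholder for whatever URI the Obol team gives you:

```
MONITORING=${MONITORING:-monitoring},monitoring-log-collector
CHARON_LOKI_ADDRESSES=<URI provided by the Obol team>
```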
To update to this version, please run the following commands:
```sh
# Stop the node
docker compose down
# Save any local changes
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.2.10-rc2
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d
```
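Once the stack is back up, a quick sanity check (assuming the default `charon` service name used by this repo's compose file) is:

```sh
# Confirm all expected containers are running
docker compose ps

# Follow charon's logs and watch for errors (Ctrl+C to stop)
docker compose logs -f charon
```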
Note
lido-charon-distributed-validator-node is a repository intended as a deployment guide; it is not intended to be the canonical way to deploy a distributed validator.
Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster has no single point of failure, running across three or more nodes.
What's Changed
- Update version to v1.8.0 by @github-actions[bot] in #227
- Bump stack versions by @KaloyanTanev in #228
- Update version to v1.8.1 by @github-actions[bot] in #232
- Add ALERT_DISCORD_IDS label and env variable by @DiogoSantoss in #218
- OTLP setup by @pinebit in #220
- Migrate to Grafana Alloy as the recommended replacement for log forwa… by @apham0001 in #201
- feat(dashboard): add Hoodi network support and update queries by @qwe638853 in #230
- Update prysm and nimbus vc by @OisinKyne in #233
New Contributors
- @qwe638853 made their first contribution in #230
Full Changelog: v0.2.9...v0.2.10-rc2