CODE_COLOR: CODE_YELLOW_MAINNET
RELEASE_VERSION: 2.7.0
PROTOCOL_UPGRADE: TRUE
DATABASE_UPGRADE: TRUE
SECURITY_UPGRADE: FALSE
## [2.7.0]
Note: this is an upgrade from protocol version 77 directly to 79.
### Protocol Changes

#### One more shard will be added
A new shard layout for production networks (#13324), using `650` as the split boundary (#13609). The number of shards will increase from 8 to 9.
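Shard layouts on NEAR are defined by a sorted list of boundary account IDs: an account belongs to the shard whose range contains it, so inserting one new boundary splits one shard in two. A minimal sketch under that assumption (the boundary list below is made up; only the new `650` boundary comes from #13609):

```python
import bisect

def shard_for_account(account_id: str, boundary_accounts: list[str]) -> int:
    # Shard i holds accounts in [boundary_accounts[i-1], boundary_accounts[i]);
    # accounts below the first boundary fall into shard 0.
    return bisect.bisect_right(boundary_accounts, account_id)

# Hypothetical 8-shard layout (7 boundaries); real mainnet boundaries differ.
old_boundaries = ["b", "d", "f", "h", "j", "m", "t"]
# Protocol 79 inserts "650" as a new boundary, splitting one shard in two.
new_boundaries = sorted(old_boundaries + ["650"])

assert len(old_boundaries) + 1 == 8  # 8 shards before
assert len(new_boundaries) + 1 == 9  # 9 shards after
# Digit-prefixed accounts sort before letters, so the first shard splits:
assert shard_for_account("123.near", new_boundaries) == 0
assert shard_for_account("7even.near", new_boundaries) == 1
```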
#### Non-upgraded validators will be kicked out after the voting epoch

When the protocol version upgrade voting takes place, validators that did not upgrade to the latest version will be scheduled for removal (aka kickout) in the epoch the new version takes effect (#13375). This helps avoid missed blocks in the first epoch of the new version, since un-upgraded validators would produce invalid blocks. Upgrading the node after voting has taken place will not prevent the kickout.
- With 2.7.0, non-upgraded nodes won’t be able to progress after the voting epoch.
- In future network upgrades, non-upgraded nodes would be able to progress in the epoch between voting and the network upgrade.
If you don’t upgrade your node in time, its database will be corrupted and will require a reset (e.g. from a snapshot or using Epoch Sync).
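The rule can be sketched as a tiny decision function (names are illustrative, not nearcore's actual API): what matters is the protocol version a validator advertises when voting concludes, and upgrading afterwards does not cancel the scheduled kickout.

```python
def scheduled_for_kickout(advertised_version: int,
                          new_version: int,
                          voting_concluded: bool) -> bool:
    # Illustrative sketch of the #13375 behavior: the version snapshot taken
    # when voting concludes decides the kickout; later upgrades don't help.
    return voting_concluded and advertised_version < new_version

assert not scheduled_for_kickout(79, 79, voting_concluded=True)   # upgraded in time
assert scheduled_for_kickout(78, 79, voting_concluded=True)       # kicked in first v79 epoch
assert not scheduled_for_kickout(78, 79, voting_concluded=False)  # still time to upgrade
```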
### Other changes

- Implement NEP-536: reduce the number of refund receipts by removing pessimistic gas pricing. Also introduce a gas refund penalty, but set it to 0 to avoid potential negative impact. (#13397)
- Implement P2P sync for state sync headers. (#13377)
- Increase the threshold for rejecting transactions due to missing chunks to 100 chunks, improving resilience during high congestion periods. (#13881)
- Enable saturating float-to-int conversions in the runtime. (#13414)
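The penalty introduced by NEP-536 can be sketched as follows; the max-of-flat-and-proportional shape and the parameter names follow the NEP's proposal but should be treated as assumptions. With both parameters at 0, as in this release, refunds are unchanged.

```python
def gas_refund(unspent_gas: int,
               refund_penalty_ratio: float = 0.0,
               min_refund_penalty: int = 0) -> int:
    # Sketch of the NEP-536 refund penalty (names assumed): the penalty is
    # the larger of a proportional and a flat component, capped so the
    # refund never goes negative.
    penalty = max(int(unspent_gas * refund_penalty_ratio), min_refund_penalty)
    return max(unspent_gas - penalty, 0)

TGAS = 10**12
assert gas_refund(TGAS) == TGAS                       # this release: zero penalty
assert gas_refund(10 * TGAS, 0.05, TGAS) == 9 * TGAS  # hypothetical future values
```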
### Non-protocol Changes

- Add RPC query for viewing global contract code. (#13547)
- Add promise batch host functions for global contracts. (#13565)
- Stabilize the `EXPERIMENTAL_changes` RPC method and rename it to `changes`. (#13722)
- Rename `TxRequestHandlerActor` to `RpcHandlerActor` to reflect the change in the scope of its responsibilities. Otherwise the API change is fully backward-compatible, so dependent services can handle it by simply renaming the type where it is mentioned explicitly. (#13259)
- Improve state sync reliability by removing the assumption that the fallback source will eventually succeed. Sync now keeps trying both sources until the required part is obtained. (#13891)
### Protocol upgrade voting
Voting for protocol version 79 will start on Monday 2025-08-18 01:00:00 UTC.
- You MUST upgrade your node before this time to continue participating in consensus.
Protocol upgrade to version 79 is expected to happen 7-14 hours later on Monday 2025-08-18 between 08:00 and 15:00 UTC.
### Notes

#### Hardware requirements

After the binary update and until the resharding (in protocol version 79), nodes that track state need to load shard 0 into memory. This includes RPC nodes, archival nodes, indexers, and validators, with the exception of chunk validator nodes (outside of the top 100), because they do not track any shards. To successfully go through resharding, these nodes need at least 64GB of memory.
As with all the hardware requirements, validators that expect to become producers are also encouraged to have 64GB of memory.
The high memory requirements are in place from the moment of the binary update until the resharding process is finished in the epoch where protocol version 79 is adopted. After that, nodes that do not load memtries can be downscaled.
#### Recovery if neard crashes during resharding

If neard is restarted or crashes immediately after the transition to the new protocol version 79 (`SimpleNightshadeV6`), there's a possibility that the process won't be able to start correctly because resharding got interrupted.
The error you might see is:

```
Chain(StorageError(MemTrieLoadingError("Cannot load memtries when flat storage is not ready for shard s10.v3, actual status: Resharding(CreatingChild)")))
```
To remediate, run the following command until completion:

```
neard flat-storage resume-resharding --shard-id 0
```

And finally, start neard again.
#### Rollback from 2.7 to 2.6

If a node experiences issues after updating from 2.6 to 2.7, it’s possible to roll it back to 2.6, but it requires manual action to undo the database migration.
Rolling back is only possible before the end of the voting epoch; after that, the network won’t be compatible with neard 2.6.
Please keep the logs from the rollback process in case you run into any issues.
To roll back a node from 2.7 to 2.6, do the following:
- Stop the node.
- Using a 2.7 neard binary, run `neard database rollback-to26` and confirm the rollback. After confirmation the rollback process will start. Please don’t interrupt the rollback process, as it could corrupt the database. Wait for the process to finish; it should be instant.
- The database will be rolled back to the version used by neard 2.6.
- Start the node using the neard 2.6 binary.
#### Tracked shards config

Note: this config change is optional, and we will give a clear heads-up before the old fields are fully deprecated.
The new way of specifying the tracked shards in `config.json` is to use the `tracked_shards_config` field (introduced in #13154).
- For validators, use: `"tracked_shards_config": "NoShards"`.
- To track all shards (e.g. RPC or archival nodes), use: `"tracked_shards_config": "AllShards"`.
- For a failover shadow validator, use: `"tracked_shards_config": { "ShadowValidator": "<validator_id>" }`.

This config field cannot be used simultaneously with any of these deprecated fields (doing so will result in a panic at `neard` startup): `tracked_shards`, `tracked_accounts`, `tracked_shadow_validator`, or `tracked_shard_schedule`.
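For example, an RPC node's `config.json` could contain the following fragment (other fields omitted; the value is one of the options listed above):

```json
{
  "tracked_shards_config": "AllShards"
}
```

A failover shadow validator would instead set the `ShadowValidator` variant with the validator ID it is shadowing.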
#### Block production delay

It is advised (though not mandatory) to use the reduced (from 100ms to 10ms) block production delay (introduced in #13523). Doing so would slightly improve your node's performance. You can do it by changing the value in `config.json`:

```
consensus.doomslug_step_period={"secs":0,"nanos":10000000}
```
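Written out as a JSON fragment, the dotted path above corresponds to the following nesting in `config.json` (10,000,000 nanoseconds = 10ms):

```json
{
  "consensus": {
    "doomslug_step_period": {
      "secs": 0,
      "nanos": 10000000
    }
  }
}
```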
#### Snapshot migration

For anyone encountering this error after upgrading their node:

```
ERROR sync: Cannot build state part err=StorageError(StorageInconsistentState("No state snapshot available
```

This issue should resolve itself after the epoch ends. It is not strictly required to take any action, but if you want to fix it immediately, you can run the following script (set `NEAR_HOME` and `NEARD_BINARY` appropriately, and run it before starting the new binary):
```
NEAR_HOME=/home/ubuntu/.near/
NEARD_BINARY=neard
cd $NEAR_HOME
SNAPSHOT_PATH=$(find $NEAR_HOME/data/state_snapshot/ -mindepth 1 -maxdepth 1 -type d | head -n 1)
cp config.json genesis.json node_key.json $SNAPSHOT_PATH
$NEARD_BINARY --home $SNAPSHOT_PATH database run-migrations
rm $SNAPSHOT_PATH/*.json
```