[Testnet] Aptos Node Release v1.11.0


Release Hash: 1e997b54840870a37869dd724d89c6d2e7e893ce

CLI Version this release is compatible with: v3.1.0

Docker image tag: aptos-node-v1.11.0

Validator Update Required? Yes, by Apr 15th

Fullnode Update Required? Yes, by Apr 18th

Aptos Improvement Proposals (AIPs)

Check out all of our AIPs and discussions here on GitHub.

  • AIP-31 - Allowlisting for delegation pool
    • This AIP empowers the delegation pool owner to define which addresses are allowed to stake to the pool.
    • Feature flag: DELEGATION_POOL_ALLOWLISTING
  • AIP-77 - Multisig V2 Enhancement
    • This AIP proposes to enhance the Multisig V2 by (1) limiting the maximum number of pending transactions, (2) introducing batch operation functions, and (3) implicitly voting on the transaction execution.
    • Feature flag: MULTISIG_V2_ENHANCEMENT
  • AIP-78 - Aptos Token Objects Framework Update
    • 3 modules in the aptos-token-objects framework were upgraded.
      • collection.move
        • Functions added to change the name and max supply of a collection.
      • token.move
        • Functions added to create a token with a provided seed as well as token name.
        • Functions added to create tokens by specifying the collection via Object<Collection> rather than collection_name.
      • property_map.move
        • Function added to create and move a PropertyMap via ExtendRef.
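AIP-77's pending-transaction cap and batch operations can be sketched as follows. This is an illustrative Python model only; the cap value, names, and structure are assumptions, not the on-chain Move implementation:

```python
MAX_PENDING_TXNS = 20  # assumed limit, for illustration only

class MultisigQueue:
    """Toy model of a Multisig V2 account's pending-transaction queue."""

    def __init__(self, max_pending=MAX_PENDING_TXNS):
        self.max_pending = max_pending
        self.pending = []

    def create_transaction(self, payload):
        # (1) Reject new proposals once the pending queue is full.
        if len(self.pending) >= self.max_pending:
            raise RuntimeError("too many pending multisig transactions")
        self.pending.append(payload)

    def create_transactions(self, payloads):
        # (2) Batch operation: submit several proposals in one call.
        for p in payloads:
            self.create_transaction(p)
```

The cap bounds on-chain storage growth per multisig account; batch functions amortize transaction overhead across several proposals.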

Breaking Changes

  • Added some helper native functions in the randomness.move module which require the v1.11 binary.
  • Added a helper native function in the consensus_config.move module which requires the v1.11 binary.

Aptos Blockchain

General

  • Store VM debug information on the side. This decouples the error code and error message from the hash calculation, and also surfaces some error messages to developers that were previously not passed out of the VM.
  • [Randomness building block; feature-gated] New top-level component: a DKG runtime that, when the current epoch expires, executes a Distributed Key Generation (DKG) protocol off-chain with peers and puts a verifiable DKGResult validator transaction into the validator transaction pool.
    • A DKGResult validator transaction, once executed, makes the randomness keys available to the next validator set and switches the epoch.
    • The following related consensus logic was enabled in 1.10:
      • When proposing a block, also pull from the validator transaction pool.
      • When verifying a block proposal, reject it if a validator transaction is present without its corresponding feature enabled.
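The two consensus rules above can be sketched as a small Python model; the function and field names here are hypothetical, not the actual consensus code:

```python
def propose_block(user_txns, validator_txn_pool, max_validator_txns=1):
    # When proposing, also pull pending validator transactions
    # (e.g. a DKGResult) from the validator transaction pool.
    return {
        "validator_txns": list(validator_txn_pool[:max_validator_txns]),
        "user_txns": list(user_txns),
    }

def verify_proposal(block, feature_enabled):
    # Reject a proposal that carries a validator transaction
    # while the corresponding feature is disabled.
    if block["validator_txns"] and not feature_enabled:
        return False
    return True
```

Because verification rejects validator transactions whenever the feature is off, enabling the proposal-side logic early (in 1.10) is safe.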

Consensus

  • [Randomness building block; feature-gated] When broadcasting a vote for a block, also broadcast the randomness share for the block's parent.
  • [Randomness building block; feature-gated] Added a new component RandManager that:
    • receives the ordered block stream from consensus
    • ensures every block has randomness (by exchanging randomness shares with peers)
    • forwards the randomness-ready block stream to the BufferManager
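The RandManager flow above can be sketched as follows. This is an illustrative model under assumed names; the threshold logic and callback structure are simplifications of the real component:

```python
class RandManager:
    """Toy model: hold ordered blocks until enough randomness shares arrive."""

    def __init__(self, threshold, forward):
        self.threshold = threshold  # shares needed per block
        self.forward = forward      # callback into the (modeled) BufferManager
        self.shares = {}            # block_id -> set of peer ids seen

    def on_ordered_block(self, block_id):
        # A new ordered block from consensus starts collecting shares.
        self.shares.setdefault(block_id, set())

    def on_randomness_share(self, block_id, peer_id):
        if block_id not in self.shares:
            return  # unknown block, or already forwarded
        peers = self.shares[block_id]
        peers.add(peer_id)
        if len(peers) >= self.threshold:
            # Block is randomness-ready: forward it downstream exactly once.
            del self.shares[block_id]
            self.forward(block_id)
```

The real component aggregates cryptographic shares rather than counting peers, but the pipeline shape (ordered blocks in, randomness-ready blocks out) is the same.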

Mempool

  • Mempool was updated to more intelligently select peers when forwarding transactions. This helps to reduce end-to-end transaction latencies and improves transaction propagation reliability.

VM

  • [Randomness building block] Support a new validator transaction variant: DKGResult.
    • Indirectly feature-gated: it is ensured that a DKGResult validator transaction won’t be ordered when the feature is disabled. Also, since it’s not a user transaction, simulation does not apply and is not a concern.

Latency Reduction

  • Add a long-poll API /transactions/wait_by_hash/:txn_hash. The motivation is to reduce e2e latencies by learning of committed transactions immediately.
  • Broadcast proposal votes to reduce one hop of latency. Changes our consensus protocol to broadcast the proposal vote instead of sending it only to the next leader. This removes one hop of latency (100 ms) when aggregating the QC.
  • All validators broadcast commit vote messages. Similar to above change but this is for commit vote and reduces one hop on commit vote aggregation.
  • Reduce mempool poll time from 50 ms to 10 ms. This reduces the mempool polling interval on PFNs, cutting the latency per transaction by 25-40 ms.
  • Reduce latency of cloning the network sender using Arc pointers. A minor optimization that avoids an expensive clone in consensus; it doesn’t help much with latency under normal load, but helps when validators fall behind and fetch blocks remotely.
  • Add inline transactions in the payload to reduce latency. Changes our quorum store protocol to allow inlining transactions in a QS batch (up to a certain maximum number). On average, this reduces transaction latency by one round trip (200 ms).
  • Improve peer selection in Mempool. This change improves the peer selection logic for mempool (i.e. deciding which peers to forward transactions to):
    • Currently, the logic selects peers based only on network types and roles, but this is somewhat inefficient (especially when nodes define seeds that override more performant peers).
    • To avoid this, the change updates the selection logic to prioritize peers based on: (i) network type; (ii) distance from the validators (e.g., to avoid misconfigured and/or disconnected VFNs); and (iii) peer ping latencies (i.e., to favour closer peers).
    • To avoid excessively reprioritizing peers (which can be detrimental to mempool under load), we only update the peer priorities when: (i) our peers change; (ii) we’re still waiting for the peer monitoring service to populate the peer ping latencies; or (iii) if neither (i) nor (ii) is true, every ~10 minutes (configurable). This avoids overly reprioritizing peers at steady state.
    • Testing: new and existing test infrastructure. We also ran several PFN-only tests to confirm that average broadcast latencies are reduced.
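The three prioritization criteria can be modeled as a simple sort key. This is an illustrative sketch; the field names, rank encoding, and weighting are assumptions, not mempool's actual code:

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    network_rank: int        # (i) e.g. 0 = validator network, 1 = VFN, 2 = public
    validator_distance: int  # (ii) hops from the validator set
    ping_latency_ms: float   # (iii) from the peer monitoring service

def prioritize(peers):
    # Sort by network type first, then distance from validators, then
    # ping latency, so better-connected, closer peers come first.
    return sorted(peers, key=lambda p: (p.network_rank,
                                        p.validator_distance,
                                        p.ping_latency_ms))
```

A lexicographic sort like this encodes the stated precedence directly: latency only breaks ties between peers with the same network type and validator distance.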
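A client of the new wait_by_hash long-poll endpoint might be structured as below. The endpoint path comes from the release notes; the HTTP call itself is abstracted behind a caller-supplied `fetch` function, and the retry structure is an illustrative assumption:

```python
import time

def wait_for_transaction(fetch, txn_hash, attempts=3, backoff_s=0.0):
    """Wait for a committed transaction via the long-poll endpoint.

    `fetch(path)` should perform the HTTP GET and return the decoded
    transaction on commit, or None if the server's long poll timed out.
    """
    for _ in range(attempts):
        # The server holds this request open until the transaction
        # commits or its own timeout fires, avoiding client-side polling.
        txn = fetch(f"/transactions/wait_by_hash/{txn_hash}")
        if txn is not None:
            return txn
        time.sleep(backoff_s)  # server timed out; retry
    raise TimeoutError(f"transaction {txn_hash} not committed")
```

Compared with repeatedly polling a by-hash endpoint, the long poll lets the server respond the moment the transaction commits, which is where the e2e latency win comes from.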

Framework

  • [Randomness building block] Any on-chain config that is loaded in Rust every block or every epoch now has to buffer updates until the next epoch, instead of applying them directly. Old update functions (e.g., consensus_config::set()) are disabled. New functions (e.g., consensus_config::set_for_next_epoch()) are added. This also applies to future on-chain configs.
    • List of existing on-chain configs affected:
      • ConsensusConfig
      • ExecutionConfig
      • GasSchedule
      • Features
      • Version
      • JWKConsensusConfig
      • RandomnessConfig
    • Typically, feature-gating is achieved via on-chain configs. This change affects the on-chain configs themselves, so it can’t easily be feature-gated.
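The buffered-update pattern above can be sketched in a few lines. The method names mirror the Move functions mentioned in the notes, but this Python structure is purely illustrative:

```python
class OnChainConfig:
    """Toy model of an on-chain config with epoch-buffered updates."""

    def __init__(self, value):
        self.current = value   # what Rust reads every block/epoch
        self.pending = None    # buffered update, if any

    def set_for_next_epoch(self, value):
        # Buffer the update instead of applying it immediately,
        # so the current epoch keeps a stable config.
        self.pending = value

    def on_new_epoch(self):
        # Apply the buffered update at the epoch boundary.
        if self.pending is not None:
            self.current = self.pending
            self.pending = None
```

Buffering matters here because the DKG for the next validator set runs against a fixed configuration; applying config changes mid-epoch would invalidate that protocol run.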
  • [Randomness building block] Added operations to support async reconfiguration with DKG in reconfiguration_with_dkg.move.
    • Indirectly feature-gated: it does not have public access and all private callers are feature-gated.
  • [Randomness building block] Validator set changes (e.g., validator join/leave) are now rejected when a reconfiguration is in progress.
    • Indirectly feature-gated: reconfiguration is instant and won’t be “in progress” when the randomness feature is off.
  • [Randomness building block] New module randomness.move to hold per-block randomness seed and some naive randomness APIs.

Resolved Issues

Bug Fixes

  • Track (for counters) all the txns in the consensus payload size during very-high-gas-pressure scenarios, where only a subset of those txns is included in a block.
