Hardhat v2.9.0: performance improvements


This release of Hardhat is packed with performance improvements:

  • Hardhat Network got a new RPC method that lets you mine large amounts of blocks instantly (hardhat_mine).
  • Tests can be run in parallel now thanks to a newer version of Mocha.
  • Forking mainnet and other remote networks is faster: we saw a 2x improvement in our tests!
  • Contracts are now compiled in parallel, decreasing compilation times.

Read on to learn more about these and other changes.

Instantly mining multiple blocks with hardhat_mine

This release adds a heavily requested feature: the possibility of mining multiple blocks in constant time.

Hardhat has always let you mine a new block using the evm_mine RPC method. This is useful in several scenarios, but sometimes you want to mine a large number of blocks and the only way to do it is to call evm_mine that many times. For thousands of blocks, this can be prohibitively slow.

Starting from Hardhat v2.9.0, you can use the hardhat_mine method instead, which instantly mines any number of blocks:

// mine 256 blocks
await hre.network.provider.send("hardhat_mine", ["0x100"]);

You can also pass a second, optional parameter to specify the interval in seconds between the timestamps of each block:

// mine 1000 blocks with an interval of 1 minute
await hre.network.provider.send("hardhat_mine", ["0x3e8", "0x3c"]);

You can rely on the first and last block of the sequence produced by hardhat_mine being valid blocks, but most of the blocks in between may not be technically valid. Specifically, they can have an invalid parent hash, the coinbase account will not have been credited with block rewards, and the baseFeePerGas will be incorrect.

Also note that blocks created via hardhat_mine may not trigger new-block events, such as filters created via eth_newBlockFilter and WebSocket subscriptions to new-block events.
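If you want a quick sanity check, you can compare the block number before and after the call. Here is a minimal sketch, assuming the Hardhat Runtime Environment is available as hre inside a script or test:

// sketch: confirm that hardhat_mine advanced the chain by 256 blocks
const before = parseInt(await hre.network.provider.send("eth_blockNumber"), 16);
await hre.network.provider.send("hardhat_mine", ["0x100"]);
const after = parseInt(await hre.network.provider.send("eth_blockNumber"), 16);
console.log(after - before); // 256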

Faster remote network forking

We’ve optimized how we query data from remote nodes when forking from them. This makes mainnet forking significantly faster.

You don't need to do anything to benefit from this improvement.
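For reference, forking is configured through the forking entry of the hardhat network in your config file. The following is only a sketch, and the URL is a placeholder for your own archive node endpoint:

// hardhat.config.js (sketch; replace the url with your own endpoint)
module.exports = {
  networks: {
    hardhat: {
      forking: {
        url: "https://eth-mainnet.example.com/v2/YOUR_API_KEY", // placeholder
      },
    },
  },
};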

Running tests in parallel

The test task now supports three new options: --parallel, --bail and --grep.

The --parallel flag runs your tests in parallel. Most of the time, this should produce the same results as running your tests serially, but there are some scenarios where tests run in parallel will behave differently. You can learn more in our parallel tests guide.

The --bail flag can be used to stop the test runner as soon as some test fails. Keep in mind that this is best-effort when used in combination with parallel mode, as some tests from other test workers might continue to be executed after the first failure.

The --grep parameter can be used to filter which tests will be executed. For example, if you run hh test --grep foo, only tests and suites that have the string foo in their descriptions will be run.
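For example, these options can be used on the command line like this (shown with npx hardhat; the hh shorthand works the same way, and the grep pattern is just an example):

npx hardhat test --parallel
npx hardhat test --bail
npx hardhat test --grep "transfer"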

Parallel Solidity compilation

Starting from this version, files are now compiled in parallel if possible. A typical scenario where this can produce significant speedups is a project that uses multiple compiler versions.
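As a sketch of that scenario, a config that declares more than one compiler version looks like the following (the versions here are only examples); the files assigned to each version can now be compiled in parallel:

// hardhat.config.js (sketch; versions are only examples)
module.exports = {
  solidity: {
    compilers: [
      { version: "0.7.6" },
      { version: "0.8.9" },
    ],
  },
};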

Other changes

  • Added support for BIP39 passphrases (thanks @zhuqicn!)
  • Preserve the user's existing README when initializing a project (#1942)
  • The test task now works correctly when a test file starts with ./ (#2220)
  • A warning is now shown when a Node.js version newer than the current LTS is used.
