This is a major release.
We have finally moved to version 2!
This release introduces a thread pool and a memory pool to pull and process events in batches.
This is largely based on #307 but without the async logic. The goal is to avoid the kernel NotificationLoop, which dispatched requests to workers (userland threads) sequentially, since waking up workers carries the high cost of a thread context switch.
The previous logic is nice when you want workers to be async (like #307), but since we now have threads (and even a thread pool) dedicated to pulling and processing events, there is no issue with letting them synchronously wait in the kernel for new events and take them directly from the pending request list.
The library starts with a single main thread that pulls events by batch and dispatches them to the thread pool, keeping the last one (or the only one) to execute on the same thread. Each woken thread does the same and pulls new events at the same time. If none are returned, the thread goes back to sleep; otherwise it does the same as the main thread (dispatch, process, etc.).
Only the main thread waits indefinitely for new events, while the others wait 100ms in the kernel before returning to userland. Batching events combined with the thread and memory pools offers great flexibility of resources, especially under heavy load. Thousands of lines of code were changed in the library (thanks again to @Corillian's contribution of a full rewrite), but the public API hasn't changed much.
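To make the dispatch model above concrete, here is a minimal POSIX-threads sketch of the pull/dispatch/process loop. It is an illustration only, not the actual Dokan internals: every name in it (`pull_event_batch`, `process_event`, `pool_submit`, the batch and pool sizes) is a made-up stand-in, and the simulated event source replaces the kernel pending request list.

```c
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define BATCH_SIZE   8
#define POOL_SIZE    4
#define TOTAL_EVENTS 64    /* simulated workload */
#define QUEUE_CAP    256

/* Simulated kernel pending-request list. */
static pthread_mutex_t source_lock = PTHREAD_MUTEX_INITIALIZER;
static int produced = 0;

/* Pull up to 'max' pending events; returns how many were pulled. */
static int pull_event_batch(int *events, int max) {
    int n = 0;
    pthread_mutex_lock(&source_lock);
    while (n < max && produced < TOTAL_EVENTS)
        events[n++] = produced++;
    pthread_mutex_unlock(&source_lock);
    return n;
}

static void process_event(int ev) {
    printf("processed event %d\n", ev);
}

/* Work queue standing in for the thread pool's dispatch list. */
static int queue[QUEUE_CAP];
static int q_head = 0, q_tail = 0, done = 0;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

static void pool_submit(int ev) {
    pthread_mutex_lock(&q_lock);
    queue[q_tail++ % QUEUE_CAP] = ev;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* Pull a batch, hand all but the last event to the pool, run the last inline. */
static void dispatch_and_process(void) {
    int events[BATCH_SIZE], n;
    while ((n = pull_event_batch(events, BATCH_SIZE)) > 0) {
        for (int i = 0; i < n - 1; i++)
            pool_submit(events[i]);
        process_event(events[n - 1]);  /* the last event stays on this thread */
    }
}

/* Pool worker: wait up to ~100ms for dispatched work (the main thread waits
 * indefinitely instead), process it, then pull fresh batches like main does. */
static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&q_lock);
    for (;;) {
        while (q_head == q_tail && !done) {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_nsec += 100 * 1000 * 1000;   /* 100ms timeout */
            if (ts.tv_nsec >= 1000000000L) { ts.tv_sec++; ts.tv_nsec -= 1000000000L; }
            pthread_cond_timedwait(&q_cond, &q_lock, &ts);
        }
        if (q_head == q_tail && done)
            break;                     /* queue drained and shutting down */
        int ev = queue[q_head++ % QUEUE_CAP];
        pthread_mutex_unlock(&q_lock);

        process_event(ev);             /* handle the dispatched event */
        dispatch_and_process();        /* then behave like the main thread */

        pthread_mutex_lock(&q_lock);
    }
    pthread_mutex_unlock(&q_lock);
    return NULL;
}

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, NULL);

    /* In the real library the main thread blocks indefinitely for new events;
     * here the simulated source simply runs dry, so we stop once it is empty. */
    dispatch_and_process();

    pthread_mutex_lock(&q_lock);
    done = 1;
    pthread_cond_broadcast(&q_cond);
    pthread_mutex_unlock(&q_lock);

    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```

The intended property shown here is the one described above: every pulling thread keeps the last event of its batch for itself, so under light load a request can be served without waking any other worker and without paying a context switch.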
After running multiple benchmarks against memfs, sequential requests are about +10-35% faster, but in the real world with the thread pool the performance gains are far higher. @Corillian's full rewrite of FindFiles actually improved it by an astonishing +100-250%... crazy 😱
Much more was added; please see the changelog.
See here for how to migrate an existing 1.1.0 filesystem to 2.0.0.
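For native C filesystems, the snippet below is a minimal sketch of what a 2.x mount skeleton can look like, assuming the 2.x shape where DokanInit/DokanShutdown bracket DokanMain; the include path, the mount point, and the empty operations table are placeholders, and the migration guide linked above remains the authoritative list of changes.

```c
/* Minimal 2.x-style mount skeleton (sketch only; callbacks omitted). */
#include <dokan/dokan.h>   /* adjust the include path to your SDK install */
#include <string.h>

int main(void) {
    DOKAN_OPTIONS options;
    DOKAN_OPERATIONS operations;
    memset(&options, 0, sizeof(options));
    memset(&operations, 0, sizeof(operations));

    options.Version = DOKAN_VERSION;
    options.MountPoint = L"M:\\";   /* placeholder mount point */
    /* operations.ZwCreateFile = ...;  plug in your existing callbacks here */

    DokanInit();                     /* initialize the library (2.x) */
    int status = DokanMain(&options, &operations);
    DokanShutdown();                 /* release library resources (2.x) */
    return status;
}
```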
Thanks as always to all the contributors (@ATRiiX)!!! And big thanks again to @Corillian, who has waited years to get his work merged!