This update adds a few improvements as well as some extra features.
One thing I'm not sure about is whether I fixed a common NCNN issue. Please let me know if this solves any issues you were having.
Dependency Updates
- NCNN
- NCNN now auto-updates if installed, so you don't have to update it manually anymore. This ensures that any changes I make to the NCNN bindings won't cause chaiNNer to suddenly stop working due to an outdated NCNN.
New Features
- Add ONNX execution options to settings (#931)
- This allows you to select which GPU to use for ONNX processing, as well as which execution engine to use. If you have TensorRT set up properly on your system, you can also select TensorRT, which should theoretically give you much faster speeds when doing batch processing. Just make sure to put the Load Model node outside the iterator, since TensorRT takes a long time to convert the model to an engine (see the onnxruntime sketch after this list).
- Reporting all type mismatches (#939) (thanks @RunDevelopment)
- We will now warn you if nodes that were previously compatible become incompatible due to an upstream change, even if no custom error message has been set.
- "Soft light" blend mode (#941) (thanks @JustNoon)
- This adds a new blend mode to the Blend Images node (a reference formula is sketched after this list).
- Show a proper error message on integrated Python download failure (#949) (thanks @RunDevelopment)
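
For anyone curious what the execution engine setting maps to under the hood: ONNX inference in Python typically goes through onnxruntime, where the engine is chosen via a list of execution providers. The sketch below is just an illustration of that onnxruntime API with a placeholder model path, not chaiNNer's actual code.

```python
import onnxruntime as ort

# Providers are tried in order; onnxruntime falls back to the next one
# if a provider isn't available on the current machine.
preferred = [
    "TensorrtExecutionProvider",  # fastest for repeated runs, but slow to start (engine build)
    "CUDAExecutionProvider",      # regular GPU execution
    "CPUExecutionProvider",       # always-available fallback
]
providers = [p for p in preferred if p in ort.get_available_providers()]

# Create the session once (this is the expensive part, especially with TensorRT,
# which converts the model to an engine here) and reuse it for every image.
session = ort.InferenceSession("model.onnx", providers=providers)
print("Using:", session.get_providers())
```

This is also why the Load Model node belongs outside the iterator: with TensorRT, the one-time engine conversion happens when the model is loaded, so you only want to pay that cost once per batch, not once per image.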
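
For reference, "soft light" is usually defined by a formula along these lines. This is the Pegtop variant as a hypothetical NumPy sketch; chaiNNer's actual Blend Images implementation may use a different soft light variant (Photoshop and the W3C spec each define their own).

```python
import numpy as np

def soft_light(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    """Pegtop soft light for float images in the [0, 1] range.

    A 50% gray blend layer leaves the base image unchanged; darker
    blend values darken it and lighter values lighten it, more gently
    than the overlay mode does.
    """
    return (1.0 - 2.0 * blend) * base**2 + 2.0 * blend * base
```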
New Nodes
- Copy To Clipboard (#920) (thanks @Sryvkver)
- This node allows copying an image, text, or number to the clipboard. You can find it in the utilities section.
Other Changes
- Instead of attempting to update the required dependencies on every startup, chaiNNer now does so only when needed. (#934)
- Increased the maximum amount of VRAM PyTorch will use before tiling further in auto mode. This should improve performance a bit more. (#940)
- PyTorch's Convert To NCNN node no longer hides its outputs when ONNX is not installed; instead, it warns the user when they attempt to run it. (#952)
Bug Fixes
- Fixed ONNX models being reloaded on every upscale. The model now loads once in Load Model, as it should. (#933)
- Fixed the "FATAL ERROR!" message some users would get in their logs with NCNN during upscaling. (#947)
- Potentially fixed other NCNN upscale issues, but need users to confirm for me.
- Fixed an ordering problem with the NCNN GPU selector (#948)
- Improved modulo typing in the Math node (#938) (thanks @RunDevelopment)
- Added pow typing to the Math node (#936) (thanks @RunDevelopment)