Framework and core calculator improvements
- Update tensors_to_image_calculator.cc
- Fix incorrect name in ValidateRequiredSidePacketTypes status message.
- Delegate memory-mapping the model file to the resource system
- Add documentation for GpuOrigin::DEFAULT
- Add multiclass NMS options for the object detector.
- Add static helpers to timestamp classes
- Add Dockerfiles to allow users to build their own wheels
- Remove std::aligned_storage.
- Nit: add details to "no implementation available" error message
- Remove use of designated initializers in tflite_model_loader.cc
- Add resample_time_series_calculator.
MediaPipe Tasks update
This section highlights changes made for a specific platform that do not propagate to the other platforms.
Android
- Make LLM classes non-final to support mocking.
- Add TopP parameter in the LLM Inference API.
- Add CPU / GPU options in Java LLM Inference Task.
- Do not require Proto types in public API.
JavaScript
- [Web LLM] Fix for duplicate timestamp issue that could occur when loading two LoRA models in immediate succession
- Return error code and file error message in C API for both PredictSync and PredictAsync
- Add isIdle function to check whether a web LlmInference instance is ready for work.
- Make the parameters for generateResponse optional.
Model Maker changes
- Enable the option of exporting a model with a fixed batch size.
- Use Optional[int] instead of int | None to support Python versions before 3.10.
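
The change above is needed because the `X | None` union syntax (PEP 604) is only valid at runtime from Python 3.10 onward; on older interpreters it raises a `TypeError` when the annotation is evaluated. A minimal sketch of the equivalence (the function name and default are hypothetical, not from the Model Maker API):

```python
from typing import Optional, Union

# Optional[int] is exactly Union[int, None]; both work on Python 3.7+,
# whereas the annotation `int | None` fails at function-definition time
# on interpreters older than 3.10.
def set_batch_size(batch_size: Optional[int] = None) -> int:
    """Returns the configured batch size, defaulting to 1 when unset."""
    return batch_size if batch_size is not None else 1

# The two spellings name the same type.
assert Optional[int] == Union[int, None]
```

Note that `from __future__ import annotations` also sidesteps the runtime evaluation, but only for annotations, not for unions used in other expressions, so `Optional[int]` remains the safer choice for broad version support.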