1.36.0 (2023-10-31)
Features
- Add preview count_tokens method to CodeGenerationModel (96e7f7d)
- Allow users to pass extra serialization arguments for objects (ffbd872)
- Support serializing unhashable objects with extra arguments (77a741e)
- LLM - Added count_tokens support to ChatModel (preview) (01989b1)
- LLM - Added new regions for tuning and tuned model inference (3d43497)
- LLM - Added support for async streaming (760a025)
- LLM - Added support for multiple response candidates in code chat models (598d57d)
- LLM - Added support for multiple response candidates in code generation models (0c371a4)
- LLM - Enable tuning eval TensorBoard without evaluation data (eaf5d81)
- LLM - Released CodeGenerationModel tuning to GA (87dfe40)
- LLM - Support accelerator_type in tuning (98ab2f9)
- Support experiment autologging when using persistent cluster as executor (c19b6c3)
- Upgrade BigQuery Datasource to use write() interface (7944348)
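The LLM additions above can be sketched in code. This is a hedged illustration only: the model names ("code-bison", "text-bison"), the preview module path, and the exact method names and parameters (count_tokens, candidate_count, predict_streaming_async) are assumptions inferred from the entry wording, not confirmed by this changelog, and actually running the calls requires a configured Google Cloud project and credentials.

```python
# Hedged sketch of the 1.36.0 LLM additions. Assumes
# google-cloud-aiplatform >= 1.36.0 and configured credentials;
# model names and exact preview signatures are assumptions.

def sketch_count_tokens_and_candidates(prompt: str):
    # Deferred import so this sketch can be defined without the SDK installed.
    from vertexai.preview.language_models import CodeGenerationModel

    model = CodeGenerationModel.from_pretrained("code-bison")  # assumed model name

    # Preview count_tokens (96e7f7d): inspect token usage before predicting.
    token_info = model.count_tokens(prompt)

    # Multiple response candidates (0c371a4): request several completions at once.
    response = model.predict(prefix=prompt, candidate_count=3)
    return token_info, [candidate.text for candidate in response.candidates]


async def sketch_async_streaming(prompt: str) -> str:
    # Async streaming (760a025); the streaming method name is an assumption.
    from vertexai.preview.language_models import TextGenerationModel

    model = TextGenerationModel.from_pretrained("text-bison")  # assumed model name
    chunks = []
    async for chunk in model.predict_streaming_async(prompt):
        chunks.append(chunk.text)
    return "".join(chunks)
```

The deferred imports are deliberate: the functions only touch the SDK when called, so the sketch stays importable in environments without google-cloud-aiplatform.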
Bug Fixes
- Add setuptools to dependencies for Python 3.12 and above (afd540d)
- Fix Bigframes tensorflow serializer dependencies (b4cdb05)
- LLM - Fixed async streaming (41bfcb6)
- LLM - Make tuning use the global staging bucket if specified (d9ced10)
- LVM - Fixed negative prompt in ImageGenerationModel (cbe3a0d)
- Made the Endpoint prediction client initialization lazy (eb6071f)
- Make sure PipelineRuntimeConfigBuilder is created with the right arguments (ad19838)
- Make sure the models list is populated before indexing (f1659e8)
- Raise an exception in RoV BigQuery Write when too many rate-limit-exceeded errors occur (7e09529)
- Rollback BigQuery Datasource to use do_write() interface (dc1b82a)
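Among the fixes above, the LVM negative-prompt repair is the most user-visible. A minimal sketch of how it might be exercised follows; the model name "imagegeneration@002" and the preview module path are assumptions not confirmed by this changelog, and a real call needs configured Google Cloud credentials.

```python
def sketch_negative_prompt(prompt: str, negative_prompt: str):
    # Hedged sketch of the negative-prompt fix (cbe3a0d): the negative
    # prompt should now be passed through to the generation request.
    # Deferred import so the sketch is definable without the SDK installed.
    from vertexai.preview.vision_models import ImageGenerationModel

    model = ImageGenerationModel.from_pretrained("imagegeneration@002")  # assumed
    return model.generate_images(
        prompt=prompt,
        negative_prompt=negative_prompt,  # content to steer the model away from
        number_of_images=1,
    )
```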