
2.0.26

Released: February 11, 2024

orm

  • [orm] [bug] Replaced the "loader depth is excessively deep" warning with a shorter
    message added to the caching badge within SQL logging, for those statements
    where the ORM disabled the cache due to a too-deep chain of loader options.
    The condition which this warning highlights is difficult to resolve and is
    generally just a limitation in the ORM's application of SQL caching. A
    future feature may include the ability to tune the threshold where caching
    is disabled, but for now the warning will no longer be a nuisance.

    References: #10896

  • [orm] [bug] Fixed issue where it was not possible to use a type (such as an enum)
    within a _orm.Mapped container type if that type were declared
    locally within the class body. The scope of locals used for the eval now
    includes that of the class body itself. In addition, the expression within
    _orm.Mapped may also refer to the class name itself, if used as a
    string or with future annotations mode.

    References: #10899

  • [orm] [bug] Fixed issue where using _orm.Session.delete() along with the
    _orm.Mapper.version_id_col feature would fail to use the
    correct version identifier in the case that an additional UPDATE were
    emitted against the target object as a result of the use of
    _orm.relationship.post_update on the object. The issue is
    similar to #10800, fixed in version 2.0.25, which covered the case of
    updates alone.

    References: #10967

  • [orm] [bug] Fixed issue where an assertion within the implementation for
    _orm.with_expression() would raise if a SQL expression that was not
    cacheable were used; this was a 2.0 regression since 1.4.

    References: #10990

examples

  • [examples] [bug] Fixed regression in history_meta example where the use of
    _schema.MetaData.to_metadata() to make a copy of the history table
    would also copy indexes (which is a good thing), but caused naming
    conflicts among those indexes regardless of the naming scheme used. A
    "_history" suffix is now added to these indexes in the same way as is
    achieved for the table name.

    References: #10920

  • [examples] [bug] Fixed the performance example scripts in examples/performance to mostly
    work with the Oracle database, by adding the Identity construct
    to all the tables and allowing primary key generation to occur on this
    backend.
    A few of the "raw DBAPI" cases still are not compatible with Oracle.

sql

  • [sql] [bug] Fixed issues in _sql.case() where the logic for determining the
    type of the expression could result in NullType if the last
    element in the "whens" had no type, or in other cases where the type
    could resolve to None. The logic has been updated to scan all
    given expressions so that the first non-null type is used, as well as
    to always ensure a type is present. Pull request courtesy David Evans.

    References: #10843

typing

  • [typing] [bug] Fixed the type signature for the PoolEvents.checkin() event to
    indicate that the given DBAPIConnection argument may be None
    in the case where the connection has been invalidated.
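The corrected signature can be exercised as follows (a SQLite engine with an explicit QueuePool stands in for any pool-backed engine); the point of the fix is the Optional hint on the DBAPI connection argument:

```python
from typing import Any, Optional

from sqlalchemy import create_engine, event
from sqlalchemy.pool import QueuePool

engine = create_engine("sqlite://", poolclass=QueuePool)

seen = []


@event.listens_for(engine, "checkin")
def on_checkin(dbapi_connection: Optional[Any], connection_record: Any) -> None:
    # dbapi_connection may be None when the connection has been invalidated
    seen.append(dbapi_connection)


with engine.connect():
    pass  # returning the connection to the pool fires the checkin event

print(len(seen))  # 1
```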

postgresql

  • [postgresql] [usecase] [reflection] Added support for reflection of PostgreSQL CHECK constraints marked with
    "NO INHERIT", setting the key no_inherit=True in the reflected data.
    Pull request courtesy Ellis Valentiner.

    References: #10777

  • [postgresql] [usecase] Support the USING <method> option for PostgreSQL CREATE TABLE to
    specify the access method to use to store the contents for the new table.
    Pull request courtesy Edgar Ramírez-Mondragón.

    References: #10904

  • [postgresql] [usecase] Correctly type PostgreSQL RANGE and MULTIRANGE types as Range[T]
    and Sequence[Range[T]].
    Introduced utility sequence _postgresql.MultiRange to allow better
    interoperability of MULTIRANGE types.

    References: #9736

  • [postgresql] [usecase] Differentiate between INT4 and INT8 range and multirange types when
    inferring the database type from a _postgresql.Range or
    _postgresql.MultiRange instance, preferring INT4 if the values
    fit into it.

  • [postgresql] [bug] [regression] Fixed regression in the asyncpg dialect caused by #10717 in
    release 2.0.24 where the change that now attempts to gracefully close the
    asyncpg connection before terminating would fall back to
    terminate() only for a timeout error, not taking into account cases
    where the graceful .close() attempt fails for other reasons, such as
    connection errors.

    References: #10863

  • [postgresql] [bug] Fixed an issue regarding the use of the Uuid datatype with the
    Uuid.as_uuid parameter set to False, when using PostgreSQL
    dialects. ORM-optimized INSERT statements (e.g. the "insertmanyvalues"
    feature) would not correctly align primary key UUID values for bulk INSERT
    statements, resulting in errors. Similar issues were fixed for the
    pymssql driver as well.

mysql

  • [mysql] [bug] Fixed issue where NULL/NOT NULL would not be properly reflected from a
    MySQL column that also specified the VIRTUAL or STORED directives. Pull
    request courtesy Georg Wicke-Arndt.

    References: #10850

  • [mysql] [bug] Fixed issue in asyncio dialects asyncmy and aiomysql, where their
    .close() method is apparently not a graceful close. It is replaced with
    the non-standard .ensure_closed() method, which is awaitable, and
    .close() is moved to the so-called "terminate" case.

    References: #10893

mssql

  • [mssql] [bug] Fixed an issue regarding the use of the Uuid datatype with the
    Uuid.as_uuid parameter set to False, when using the pymssql
    dialect. ORM-optimized INSERT statements (e.g. the "insertmanyvalues"
    feature) would not correctly align primary key UUID values for bulk INSERT
    statements, resulting in errors. Similar issues were fixed for the
    PostgreSQL drivers as well.

oracle

  • [oracle] [performance] [bug] Changed the default arraysize of the Oracle dialects so that the value set
    by the driver is used, that is 100 at the time of writing for both
    cx_oracle and oracledb. Previously the value was set to 50 by default. The
    setting of 50 could cause significant performance regressions compared to
    when using cx_oracle/oracledb alone to fetch many hundreds of rows over
    slower networks.

    References: #10877
