dolthub/dolt v1.84.0


Merged PRs

dolt

  • 10724: allow both Geometry and GeometryAddr encoding for existing Geometry columns
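    The compatibility check described above can be sketched as follows. This is an illustrative, hedged sketch, not Dolt's actual code: the function name `geometryEncodingOK` and the encoding names are assumptions standing in for Dolt's real identifiers.

    ```go
    package main

    import "fmt"

    // Sketch: an existing GEOMETRY column may carry either the inline
    // Geometry encoding or the address-based GeometryAddr encoding, and a
    // schema validity check should accept both. Names are illustrative.
    func geometryEncodingOK(enc string) bool {
    	return enc == "Geometry" || enc == "GeometryAddr"
    }

    func main() {
    	for _, enc := range []string{"Geometry", "GeometryAddr", "String"} {
    		fmt.Println(enc, geometryEncodingOK(enc))
    	}
    }
    ```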
  • 10722: go: sqle/remotesrv.go: remotesapi writes against sql-server now trigger DoltDB commit hooks.
    When dolt sql-server runs with --remotesapi-port, incoming dolt push operations write directly to the underlying ChunkStore, bypassing the hooksDatabase wrapper on DoltDB. This meant commit hooks — including push-on-write replication, stats, cluster sync, and auto-GC — were silently skipped for any write that arrived via the remotesapi endpoint rather than through the SQL engine.
    Changes
    sqle/remotesrv.go — the DBCache implementation used by sql-server's remotesrv — previously discarded the *doltdb.DoltDB (with its registered hooks) after extracting the raw ChunkStore. It now returns a hooksFiringRemoteSrvStore wrapper instead of the bare store.
    After a successful Commit(), the wrapper:
    1. Diffs the dataset map at last vs current noms root hashes
    2. Fires ddb.ExecuteCommitHooks() for every ref-typed dataset (refs/heads/, refs/tags/, etc.) whose head address changed or was deleted, matching the behavior of hooksDatabase
      No changes to remotesrv, doltdb, or the gRPC layer were required.
      Tests
    • sqle/remotesrv_hook_test.go — unit tests covering hook firing on incoming commits: hooks fire for a successful commit; hooks do not fire on a rejected (CAS-failed) commit; hooks fire for deleted datasets (matching hooksDatabase.Delete); hooks fire only for datasets that actually changed.
    • integration-tests/bats/sql-server-remotesrv.bats — end-to-end test: starts a sql-server with push-on-write replication to a file remote and --remotesapi-port, performs a dolt push from a client, and asserts the replication remote is updated.
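    The diff-and-fire logic in steps 1-2 above can be sketched roughly as below. This is a simplified illustration, not the actual wrapper: `datasetHeads` and `fireHooksForChangedDatasets` are hypothetical names, and the real code diffs noms dataset maps rather than plain Go maps.

    ```go
    package main

    import (
    	"fmt"
    	"strings"
    )

    // datasetHeads maps a dataset name (e.g. "refs/heads/main") to its head
    // address. Illustrative stand-in for the noms dataset map at a root hash.
    type datasetHeads map[string]string

    // fireHooksForChangedDatasets diffs the dataset map at the last vs.
    // current root and invokes fire() for every ref-typed dataset whose head
    // changed, was created, or was deleted.
    func fireHooksForChangedDatasets(last, current datasetHeads, fire func(ds string)) {
    	// Changed or newly created datasets.
    	for ds, head := range current {
    		if !strings.HasPrefix(ds, "refs/") {
    			continue // only ref-typed datasets (refs/heads/, refs/tags/, ...)
    		}
    		if last[ds] != head {
    			fire(ds)
    		}
    	}
    	// Deleted datasets, matching hooksDatabase.Delete behavior.
    	for ds := range last {
    		if !strings.HasPrefix(ds, "refs/") {
    			continue
    		}
    		if _, ok := current[ds]; !ok {
    			fire(ds)
    		}
    	}
    }

    func main() {
    	last := datasetHeads{"refs/heads/main": "aaa", "refs/heads/old": "bbb"}
    	current := datasetHeads{"refs/heads/main": "ccc", "refs/tags/v1": "ddd"}
    	fireHooksForChangedDatasets(last, current, func(ds string) {
    		fmt.Println("hook fired for", ds)
    	})
    }
    ```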
  • 10721: couple random splunk improvements
    This fixes noms show so it can print index data (it panicked before this change).
    It also patches the splunk tool so that more than one hash on a single output line can be examined.
    Sample output:
    Table -  {
    1)  	Schema: #q3kfuk6atrj3b6e46ultp97f86gh9crk
    2)  	Violations: #00000000000000000000000000000000
    3)  	Artifacts: #00000000000000000000000000000000
    Autoinc: 0
    4)  	Primary Index (rows 1, depth 1) #ep59g4n947egeqpqfe7ou4ikjvi81rov {
    5) 6)      { key: 7669657700, 6d797600 value:  #jcch47j2ueiiuq31iuuvvbg7ve6inccc,  #53drvdrsgbvfhtnd03tlgbm1ngm72dp4, 4e4f5f454e47494e455f535542535449545554494f4e2c4f4e4c595f46554c4c5f47524f55505f42592c5354524943545f5452414e535f5441424c455300 }
    }
    Secondary Indexes (indexes 0, depth 1) o0te4bsh {
    }
    }
    > 5
    SerialMessage - {
    Blob - create view myv as select * from abc
    }
    > ..
    Table -  {
    1)  	Schema: #q3kfuk6atrj3b6e46ultp97f86gh9crk
    2)  	Violations: #00000000000000000000000000000000
    3)  	Artifacts: #00000000000000000000000000000000
    Autoinc: 0
    4)  	Primary Index (rows 1, depth 1) #ep59g4n947egeqpqfe7ou4ikjvi81rov {
    5) 6)      { key: 7669657700, 6d797600 value:  #jcch47j2ueiiuq31iuuvvbg7ve6inccc,  #53drvdrsgbvfhtnd03tlgbm1ngm72dp4, 4e4f5f454e47494e455f535542535449545554494f4e2c4f4e4c595f46554c4c5f47524f55505f42592c5354524943545f5452414e535f5441424c455300 }
    }
    Secondary Indexes (indexes 0, depth 1) o0te4bsh {
    }
    }
    > 6
    SerialMessage - {
    Blob - {"CreatedAt":0}
    }
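    Extracting every hash from one output line (note the "5) 6)" line above, which carries two hashes) might look like the sketch below. The regex is an assumption modeled on the 32-character base32 form of the hashes in the sample output, not splunk's actual parser.

    ```go
    package main

    import (
    	"fmt"
    	"regexp"
    )

    // hashRe matches "#"-prefixed 32-character base32 hashes as they appear
    // in the output above. Assumed format, not splunk's real implementation.
    var hashRe = regexp.MustCompile(`#([0-9a-v]{32})`)

    // hashesOnLine returns every hash found on a single line of output, so
    // each one can be offered for inspection rather than only the first.
    func hashesOnLine(line string) []string {
    	var out []string
    	for _, m := range hashRe.FindAllStringSubmatch(line, -1) {
    		out = append(out, m[1])
    	}
    	return out
    }

    func main() {
    	line := "value: #jcch47j2ueiiuq31iuuvvbg7ve6inccc, #53drvdrsgbvfhtnd03tlgbm1ngm72dp4"
    	fmt.Println(hashesOnLine(line))
    }
    ```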
    
  • 10719: go: statspro: Add a quiesced state.
    The stats worker should not continuously consume CPU resources scanning database roots when stats are known to be up to date and no intervening writes have occurred.
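    A quiesced state of this shape can be sketched as below. This is an illustrative model, not Dolt's statspro code: `statsWorker` and `maybeScan` are hypothetical names, and the real implementation tracks noms root hashes rather than strings.

    ```go
    package main

    import "fmt"

    // statsWorker records the root it last scanned; once stats are up to
    // date it quiesces until a write produces a different root.
    type statsWorker struct {
    	lastScannedRoot string
    	quiesced        bool
    }

    // maybeScan runs scan() only when the current root differs from the last
    // scanned root; otherwise the worker stays quiesced and burns no CPU.
    func (w *statsWorker) maybeScan(currentRoot string, scan func()) bool {
    	if w.quiesced && currentRoot == w.lastScannedRoot {
    		return false // up to date, no intervening writes: stay quiesced
    	}
    	scan()
    	w.lastScannedRoot = currentRoot
    	w.quiesced = true
    	return true
    }

    func main() {
    	w := &statsWorker{}
    	scans := 0
    	scan := func() { scans++ }
    	w.maybeScan("r1", scan) // first sight of r1: scans
    	w.maybeScan("r1", scan) // unchanged: quiesced, no scan
    	w.maybeScan("r2", scan) // a write happened: scans again
    	fmt.Println("scans:", scans) // scans: 2
    }
    ```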
  • 10715: go: store/datas: database_common: On FastForward with working set checks, create the working set with the correct root hash if it does not already exist.
    Previously, we avoided doing this because there is logic in the SQL server that automatically creates the working set if it doesn't already exist. However, that logic cannot cover all cases, such as multi-branch access within a SQL transaction. Going forward, it is more appropriate, and more aligned with Dolt's expectations, to always create the working set on a branch write if it does not already exist.
    Because of remotesapi.PushConcurrencyControl, this change does not change the behavior of Dolt when writing to a pure remotesapi endpoint, like dolthub.com. In that case, working sets are not typically pushed to the remote and they are not written on a branch write.
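    The create-if-missing behavior on a branch write can be sketched roughly as below. All names here (`store`, `FastForward`) are illustrative stand-ins, and real FastForward also performs ancestry and CAS checks that are omitted.

    ```go
    package main

    import "fmt"

    // store models branch heads and per-branch working sets. Hypothetical
    // simplification of datas.Database state.
    type store struct {
    	heads       map[string]string // branch -> head root hash
    	workingSets map[string]string // branch -> working root hash
    }

    // FastForward advances a branch head and, if the branch has no working
    // set yet, creates one pointing at the new root instead of leaving that
    // to the SQL layer (which cannot cover cases like multi-branch access
    // within a transaction).
    func (s *store) FastForward(branch, newRoot string) {
    	s.heads[branch] = newRoot
    	if _, ok := s.workingSets[branch]; !ok {
    		s.workingSets[branch] = newRoot // create with the correct root
    	}
    }

    func main() {
    	s := &store{heads: map[string]string{}, workingSets: map[string]string{}}
    	s.FastForward("feature", "abc123")
    	fmt.Println(s.workingSets["feature"]) // abc123
    }
    ```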
  • 10708: Store encoding in typeinfo, and use the serialized encoding as canonical when reading values from disk
    The goal of this change is to honor the encoding of a schema column, as used by values for that column written to disk, when we deserialize it. Currently we compute the encoding for a column from its SQL type, serialize it to disk as part of the schema, then on deserialization throw it away and compute it from the SQL type again. Now, we derive it from the SQL type on first write, and honor the value serialized to disk when reading it back.
    This makes it possible for different versions of Dolt, or the same version with different configuration / settings, to change how the same SQL type is serialized to disk. For 2.0, we will begin serializing BLOB and TEXT types with adaptive encoding by default. Existing columns will continue using the encoding they designated at the time of column creation.
    For compatibility testing, I've expanded our existing integration testing to consider every possible SQL type and all possible DML operations on it.
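    The write-once/read-as-stored rule described above can be sketched as follows. This is a hedged model, not Dolt's typeinfo code: `Column`, `encodingForSQLType`, and the encoding names are assumptions chosen for illustration.

    ```go
    package main

    import "fmt"

    // Column pairs a SQL type with the encoding serialized alongside the
    // schema. Illustrative stand-in for a schema column.
    type Column struct {
    	SQLType  string
    	Encoding string // stored on disk with the schema
    }

    // encodingForSQLType models a version-dependent default: newer versions
    // may default BLOB/TEXT to an adaptive encoding.
    func encodingForSQLType(sqlType string, adaptiveDefault bool) string {
    	if (sqlType == "BLOB" || sqlType == "TEXT") && adaptiveDefault {
    		return "adaptive"
    	}
    	return "addr"
    }

    // newColumn derives the encoding from the SQL type exactly once, at
    // column creation time, and serializes it with the schema.
    func newColumn(sqlType string, adaptiveDefault bool) Column {
    	return Column{SQLType: sqlType, Encoding: encodingForSQLType(sqlType, adaptiveDefault)}
    }

    // readEncoding honors the serialized encoding on deserialization instead
    // of recomputing it from the SQL type.
    func readEncoding(serialized Column) string {
    	return serialized.Encoding
    }

    func main() {
    	// A TEXT column created before adaptive defaults keeps its original
    	// encoding even when read by a version whose default has changed.
    	old := newColumn("TEXT", false)
    	fmt.Println(readEncoding(old)) // addr
    }
    ```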
  • 10707: Remove ValidateTagUniqueness check

go-mysql-server

  • 3473: fix missing check expressions in child expressions of update
  • 3471: Changed some reflection checks to use type equality
    reflect.DeepEqual has issues with some types implemented by integrators, so we now compare types directly for equality in those scenarios.
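    A generic Go illustration of the pitfall (not go-mysql-server's actual code): values carrying function fields never compare DeepEqual unless both funcs are nil, whereas comparing reflect.Type values answers the real question of whether two things share a concrete type.

    ```go
    package main

    import (
    	"fmt"
    	"reflect"
    )

    // handler stands in for an integrator-provided type that happens to
    // carry a function field. Hypothetical example type.
    type handler struct {
    	name string
    	fn   func()
    }

    func main() {
    	a := handler{name: "h", fn: func() {}}
    	b := handler{name: "h", fn: func() {}}

    	// DeepEqual compares values; non-nil func fields are never deeply
    	// equal, so two logically identical handlers compare unequal.
    	fmt.Println(reflect.DeepEqual(a, b)) // false

    	// Direct type equality asks "same concrete type?", which is the
    	// question a type check actually needs answered.
    	fmt.Println(reflect.TypeOf(a) == reflect.TypeOf(b)) // true
    }
    ```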

Closed Issues

  • 10089: Dolt does not show the correct precision for datetime
  • 10706: JSON_OBJECT on datetime field doesn't represent fractional seconds
  • 10698: Interactive rebase fails with 'changes in branch' when ignored tables exist in working set
