Changelog
1.4 is now 2.0
Hasura 1.4 has significant feature and product enhancements that warranted bumping the major version up! The new release is now called `v2.0.0-alpha.1` (which would have been equivalent to `v1.4.0-alpha.3`).
However, there are no major user-facing breaking changes, and users can upgrade from Hasura 1.3 to 2.0 seamlessly. There are two minor behaviour changes in this release (which only affect certain corner cases and have long been requested by the community); please find the details in the "Behaviour changes" section at the bottom.
Hasura 2.0 highlights
Multiple databases
You can now add and query multiple Postgres databases simultaneously.
Metadata storage separation
You can now store Hasura metadata in a separate Postgres database, specified by the environment variable `HASURA_GRAPHQL_METADATA_DATABASE_URL`.
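For example, a deployment could keep application data and Hasura metadata in two different Postgres instances (both connection strings below are purely illustrative):

```sh
# Illustrative connection strings — replace with your own
HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@app-db:5432/app
HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://user:password@metadata-db:5432/hasura_metadata
```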
Generalized backend
The codebase has been generalized to support different databases (MS SQL Server being the first one we will release as a part of 2.0).
REST APIs
Support for creating idiomatic REST API endpoints from GraphQL query templates so that users can support non-GraphQL clients easily and integrate with their existing REST tooling.
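As a sketch of what this looks like, an endpoint can be created through the metadata API's `create_rest_endpoint` command; the endpoint, collection, and query names below are illustrative:

```json
{
  "type": "create_rest_endpoint",
  "args": {
    "name": "user_by_id",
    "url": "users/:id",
    "methods": ["GET"],
    "definition": {
      "query": {
        "collection_name": "allowed-queries",
        "query_name": "user_by_id"
      }
    }
  }
}
```

The template query referenced here would live in a query collection, and the `:id` URL segment is bound to the query's variable of the same name.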
Remote Schema Permissions
Role based schemas and argument presets (based on session variables) are now available for remote schemas.
Volatile functions as mutations/queries
You can now track VOLATILE Postgres functions as mutations (or even queries).
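A minimal sketch of tracking such a function via the metadata API, assuming a hypothetical `public.update_score` VOLATILE function:

```json
{
  "type": "pg_track_function",
  "args": {
    "function": { "schema": "public", "name": "update_score" },
    "source": "default",
    "configuration": { "exposed_as": "mutation" }
  }
}
```

Setting `exposed_as` to `"query"` instead would expose the same volatile function under the query root.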
Other fixes and improvements:
- server: add `allow_inconsistent_metadata` option to the `replace_metadata` API to allow adding objects which are inconsistent
- server: add `resource_version` field to metadata for concurrency control
- server: disable lookup during migrations
- server: add request field to webhook POST body containing the GraphQL query/mutation, its name, and any variables passed (close #2666)
- server: add --websocket-compression command-line flag for enabling websocket compression (fix #3292)
- server: some mutations that cannot be performed will no longer be in the schema (for instance, delete_by_pk mutations won't be shown to users that do not have select permissions on all primary keys) (#4111)
- server: treat the absence of backend_only configuration and backend_only: false equally (closing #5059) (#4111)
- server: accept only non-negative integers for batch size and refetch interval (close #5653) (#5759)
- server: Configurable websocket keep-alive interval. Add --websocket-keepalive command-line flag and HASURA_GRAPHQL_WEBSOCKET_KEEPALIVE env variable (fix #3539)
- server: introduce optional custom table name in table configuration to track the table according to the custom name. The `set_table_custom_fields` API has been deprecated; a new API, `set_table_customization`, has been added to set the configuration. (#3811)
- server: support joining Int or String scalar types to ID scalar type in remote relationship
- server: add support for `regex` operators (close #4317) (#6172)
- server: do not block catalog migration on inconsistent metadata
- server: various changes to ensure timely cleanup of background threads and other resources in the event of a SIGTERM signal.
- server: fix issue with event triggers defined on a table which is partitioned (fixes #6261)
- server: action array relationships now support the same input arguments (such as where or distinct_on) as usual relationships
- server: action array relationships now support aggregate relationships
- server: fix issue with non-optional fields of the remote schema being added as optional in the graphql-engine (fix #6401)
- server: accept new config allowed_skew in JWT config to provide leeway for JWT expiry (fixes #2109)
- server: always log the request_id at the detail.request_id path for both query-log and http-log (#6244)
- server: fix issue with --stringify-numeric-types not stringifying aggregate fields (fix #5704)
- server: terminate a request if time to acquire connection from pool exceeds configurable timeout (#6326)
- server: support tracking of functions that return a single row (fix #4299)
- console: allow user to cascade Postgres dependencies when dropping Postgres objects (close #5109) (#5248)
- console: mark inconsistent remote schemas in the UI (close #5093) (#5181)
- console: add onboarding helper for new users
- console: add option to flag an insertion as a migration from Data section (close #1766) (#4933)
- console: down migrations improvements (close #3503, #4988) (#4790)
- console: show only compatible postgres functions in computed fields section (close #5155) (#5978)
- console: add session argument field for computed fields (close #5154) (#5610)
- console: add tree view for Data Tab UI
- cli: add missing global flags for seed command (#5565)
- cli: allow seeds as alias for seed command (#5693)
- build: add test_server_pg_13 to the CI to run the server tests on Postgres v13 (#6070)
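As an example of the `allow_inconsistent_metadata` option mentioned above, it can be passed to `replace_metadata` roughly as follows (a sketch; the metadata body is truncated to a minimal illustrative shape):

```json
{
  "type": "replace_metadata",
  "version": 2,
  "args": {
    "allow_inconsistent_metadata": true,
    "metadata": {
      "version": 3,
      "sources": []
    }
  }
}
```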
Behaviour changes
- Multiple mutations in the same request are not transactional. UPDATE (since v2.0.0-cloud.3): for the Postgres data source only, multiple fields in a mutation will be run in one transaction to preserve backwards compatibility.

- Semantics of explicit "null" values in "where" filters have changed. Following the discussion in issue 704, an explicit `null` value in a comparison input object will be treated as an error rather than resulting in the expression being evaluated to `True`. For example, the mutation `delete_users(where: {id: {_eq: $userId}}) { name }` will yield an error if `$userId` is `null`, instead of deleting all users. UPDATE (since v2.0.0-cloud.10): the old behaviour can be enabled by setting an environment variable: `HASURA_GRAPHQL_V1_BOOLEAN_NULL_COLLAPSE: true`.

- Semantics of "null" join values in remote schema relationships have changed. In a remote schema relationship query, the remote schema will be queried only when all of the joining arguments are non-`null`. When any of them is `null`, the remote schema won't be queried and the response of the remote relationship field will be `null`. Earlier, the remote schema was queried with the `null` arguments and the response depended upon how the remote schema handled them; as per user feedback, this behaviour was clearly not expected.

- Order of keys in objects passed as "order_by" operator inputs is not preserved. The `order_by` operator accepts an array of objects as input to allow ordering by multiple fields in a given order, i.e. `[{field1: sortOrder}, {field2: sortOrder}]`, but it also accepts a single object with multiple keys, i.e. `{field1: sortOrder, field2: sortOrder}`. In earlier versions, Hasura's query parsing logic maintained the order of keys in the input object, and hence the generated `order by` clauses listed the fields in the right order. As the GraphQL spec states that input object keys are unordered, Hasura v2.0's new, stricter query parsing logic doesn't maintain the order of keys, taking away the guarantee that the generated `order by` clauses have the fields in the given order. For example, the query `fetch_users(order_by: {age: desc, name: asc}) {id name age}`, which is intended to fetch users ordered by their age and then by their name, is no longer guaranteed to return results ordered that way, since the `order_by` input is passed as a single object. To achieve the expected behaviour, use `fetch_users(order_by: [{age: desc}, {name: asc}]) {id name age}`, which uses an array to define the order of fields and thus generates the appropriate `order by` clause.

- Incompatibility with remote schemas running older Hasura versions. With v2.0, some of the auto-generated schema types have been extended. For example, `String_comparison_exp` has an additional `regex` input object field. This means that if you have a Hasura API with an older Hasura version added as a remote schema, it will have a type conflict. You should upgrade all Hasura remote schemas to avoid such type conflicts.

- Migrations are not executed under a single transaction. When applying multiple migrations, earlier Hasura CLI versions ran all migration files under one transaction block, i.e. if any migration threw an error, all the previously executed migrations were rolled back. With Hasura CLI v2.0, each migration file runs in its own transaction block, but the migrations as a whole are not wrapped in one: if any migration throws an error, applying further migrations stops, but the migrations that succeeded up to that point are not rolled back.
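The per-file transaction behaviour can be illustrated with a minimal Python sketch using `sqlite3` (the migration names and SQL are made up, and Hasura's CLI runs against Postgres, not SQLite — this only mirrors the control flow described above):

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply each (name, sql) migration in its own transaction.
    Stop at the first failure, but keep earlier migrations committed."""
    applied = []
    for name, sql in migrations:
        try:
            with conn:  # one transaction per migration file
                conn.execute(sql)
            applied.append(name)
        except sqlite3.Error:
            break  # later migrations are skipped; earlier ones stay applied
    return applied

conn = sqlite3.connect(":memory:")
migrations = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY)"),
    ("002_broken", "ALTER TABLE missing_table ADD COLUMN x INTEGER"),
    ("003_create_posts", "CREATE TABLE posts (id INTEGER PRIMARY KEY)"),
]
print(apply_migrations(conn, migrations))  # ['001_create_users']
```

After the run, the `users` table exists and stays in place even though migration `002_broken` failed, while `003_create_posts` was never attempted — exactly the v2.0 CLI semantics.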
Using this release
Use the following Docker image: `hasura/graphql-engine:v2.0.0-alpha.1`
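A minimal way to run the image might look like this (the port mapping and database URL are illustrative):

```sh
docker run -d -p 8080:8080 \
  -e HASURA_GRAPHQL_DATABASE_URL=postgres://user:password@db:5432/app \
  hasura/graphql-engine:v2.0.0-alpha.1
```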