NOTE: If you're upgrading from 0.10, see changes in the milestone releases as well:
Goodbye Scala 2.10 and Java 7
We've finally decided to drop Scala 2.10 and Java 7 support and align with the most recent version of Finagle (6.40 at this point). See #686. For reference, Finagle and co. dropped 2.10 support in 6.35 (five releases ago).
Server Sent Events
Finch now provides very basic support for SSE. See #655 and the new cookbook example (thanks @rpless). Long story short, serving an `AsyncStream[io.finch.sse.ServerSentEvent[A]]`, for which a `cats.Show[A]` is defined, will stream the generated events back to the client until the connection is closed or the stream is empty.
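To make the wire format concrete, here is a minimal, self-contained sketch of how one event could be rendered via a `Show`-style instance. All names below (`ServerSentEvent`, `Show`, `encode`) are simplified stand-ins, not Finch's actual types:

```scala
// Simplified stand-ins for io.finch.sse.ServerSentEvent and cats.Show
// (illustrative only; Finch's real definitions differ).
case class ServerSentEvent[A](data: A)

trait Show[A] { def show(a: A): String }

implicit val showInt: Show[Int] = (i: Int) => i.toString

// An SSE frame is a "data:" line followed by a blank line.
def encode[A](e: ServerSentEvent[A])(implicit s: Show[A]): String =
  s"data: ${s.show(e.data)}\n\n"

val frame = encode(ServerSentEvent(42))
// frame == "data: 42\n\n"
```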
New Decoding Machinery
In 0.11-M1 we made decent progress on making encoding in Finch less magical by introducing a type-level content type. This removed a lot of ambiguity when more than one implicit encoder is present in scope.
This time, we did a similar trick for the `Decode` type class. It now embeds a type-level string indicating the content-type this decoder can decode. This highlighted an obvious difference between decoding a query-string param as an `Int` and an HTTP body as JSON. These are clearly two different use cases that should be handled separately. Thus there is a new type class for decoding HTTP entities (not bodies) in Finch: `DecodeEntity`, which (1) doesn't embed a content-type and (2) decodes from `String`. `Decode`, on the other hand, knows about the content-type and (now) decodes from `Buf` (as it should). See #663 for more details.
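The split can be illustrated with simplified signatures. These are sketches only: a phantom type parameter stands in for Finch's type-level content-type string, and `Array[Byte]` stands in for Finagle's `Buf`:

```scala
// DecodeEntity: no content-type; decodes query-string params and headers
// from String.
trait DecodeEntity[A] { def apply(s: String): Either[Throwable, A] }

// Decode: indexed by a phantom content-type CT; decodes HTTP bodies from
// raw bytes (Array[Byte] stands in for Finagle's Buf here).
trait Decode[A, CT] { def apply(b: Array[Byte]): Either[Throwable, A] }

// An entity decoder for Int, as used by query-param endpoints.
implicit val decodeInt: DecodeEntity[Int] = (s: String) =>
  try Right(s.toInt) catch { case e: Throwable => Left(e) }

val ok = decodeInt("42")
```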
This work not only made the decoding story in Finch more explicit (clear separation of concerns; decoders for HTTP bodies are now resolved w.r.t. their content-types) but also allowed a performance optimization for `body*` endpoints. Given that we can now decode directly from `Buf`, we eliminate the extra copy involved in going from `Buf` to `String` (to satisfy the `DecodeEntity` input type). At this point, we've managed to improve the throughput of our end-to-end test by 12.5%. See #671 (thanks @rpless) for more details.
All these benefits come at the cost of a breaking API change. See API changes for more details.
Decoding with Content-Type
Now that we have the type-level content-type in place, we can force implicit resolution to pick the right decoder for an expected request. We've introduced a new API for `body*` endpoints that also accepts the desired content-type. See #695 for more details.
NOTE: `body.as[User]` is still supported and defaults to `body[User, Application.Json]`.
```scala
// before (deprecated in this version)
body.as[User]

// after
body[User, Application.Json]

// or using an alias method
jsonBody[User]
```
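A hedged sketch of why the extra type parameter removes ambiguity: implicit resolution can only pick a decoder whose content-type matches the one requested. The `Json` marker, the `Decode` trait, and the `body` helper below are simplified illustrations, not Finch's actual API:

```scala
// Phantom content-type markers (simplified stand-ins).
sealed trait Json
sealed trait Text

// A decoder indexed by a phantom content-type.
trait Decode[A, CT] { def apply(raw: String): Either[Throwable, A] }

case class User(name: String)

// Pretend this is real JSON decoding; only defined for the Json content-type.
implicit val jsonUser: Decode[User, Json] = (raw: String) => Right(User(raw))

// Resolution picks the decoder matching both the target type and content-type.
def body[A, CT](raw: String)(implicit d: Decode[A, CT]): Either[Throwable, A] =
  d(raw)

val u = body[User, Json]("Bob")
// body[User, Text]("Bob") would not compile: no Decode[User, Text] in scope.
```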
Previously, it was also possible to run `as[User]` to decode a `User` from JSON sent as a query-string param or a header. This is indeed really powerful and allows some questionable design patterns that aren't necessarily useful. Instead of being super generic here, we're trying to reduce the number of ways things can be built. This not only makes them easier to reason about but also quite efficient (because, you know, specialization).
That said, we're promoting a new way of decoding HTTP payloads. Instead of using `.as[A]`, we make it less powerful and more explicit. By limiting the responsibility of the `Decode` type class, we tie it directly to HTTP payloads. This means constructing `body*` endpoints can be done in a single step instead of producing three nested structures, hence reducing allocations.
Quick experiments showed that we could save 15% of running time and 20% of allocations just by being explicit (`json2` is the new `body[A, Application.Json]`, `json` is `body.as[A]`):
```
[info] BodyBenchmark.json                          avgt  6  4824.580 ± 1205.444  ns/op
[info] BodyBenchmark.json:·gc.alloc.rate.norm      avgt  6  5896.004 ±  147.449   B/op
[info] BodyBenchmark.json2                         avgt  6  4179.209 ±  673.098  ns/op
[info] BodyBenchmark.json2:·gc.alloc.rate.norm     avgt  6  4936.004 ±   73.723   B/op
[info] BodyBenchmark.jsonOption                    avgt  6  4335.755 ±  150.928  ns/op
[info] BodyBenchmark.jsonOption:·gc.alloc.rate.norm avgt 6  5712.004 ±    0.001   B/op
[info] BodyBenchmark.jsonOption2                   avgt  6  4050.681 ±  685.263  ns/op
[info] BodyBenchmark.jsonOption2:·gc.alloc.rate.norm avgt 6 4940.004 ±   36.862   B/op
```
Fixing Mistakes in Errors
Some of Finch's core components, including its error types, were designed a couple of years ago. Mistakes were made. We acknowledge this and are fixing them now. See #694 for the full discussion.
Here is the summary of what's changed:
- A misleading type for error accumulation (`RequestErrors`) now represents a flat non-empty list of Finch's own errors (previously, a recursive `Seq` of generic `Throwable`s). Technically, the new type tells exactly what happens at runtime (which is exactly why we need types): we always flatten errors while collecting, never nest them.
- The product endpoint now only accumulates Finch's own errors and fails fast with the first non-Finch error observed. We think this is a sane default behavior, given that it's not safe to keep evaluating endpoints once one of them has failed for an unknown reason that could have side-effected the entire application.
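The new product semantics can be sketched in a self-contained way. This is an illustration, not Finch's code; `FinchError`, `Errors`, and `both` are made-up names standing in for the real types:

```scala
// Stand-ins for Finch's own error type and its flat accumulator.
case class FinchError(msg: String) extends Exception(msg)
case class Errors(errors: List[FinchError]) extends Exception

// Flatten a throwable into Finch's own errors, if it is one.
def finchErrors(t: Throwable): Option[List[FinchError]] = t match {
  case e: FinchError => Some(List(e))
  case Errors(es)    => Some(es)
  case _             => None
}

// Product of two results: accumulate (flat, never nested) when both sides
// failed with Finch's own errors; otherwise fail fast.
def both[A, B](
  a: Either[Throwable, A],
  b: Either[Throwable, B]
): Either[Throwable, (A, B)] = (a, b) match {
  case (Right(x), Right(y)) => Right((x, y))
  case (Left(ea), Left(eb)) =>
    (finchErrors(ea), finchErrors(eb)) match {
      case (Some(xs), Some(ys)) => Left(Errors(xs ++ ys)) // flat accumulation
      case (None, _)            => Left(ea)               // non-Finch error wins
      case (_, None)            => Left(eb)
    }
  case (Left(e), _) => Left(e)
  case (_, Left(e)) => Left(e)
}
```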
Lift Your Stuff
`Endpoint.liftX` is a collection of factory methods allowing you to build Finch `Endpoint`s out of anything. Presumably, these could be useful for wrapping functions returning arbitrary values within an `Endpoint` context.
```scala
// The following endpoint will recompute a random integer on each request.
val nextInt: Endpoint[Int] = Endpoint.lift(scala.util.Random.nextInt)
```
Behaviour Changes
- Finch now defines a very basic instance of `Encode.Aux[Exception, ?]` that is polymorphic with respect to the content type. This means that if no `Encode[Exception]` is provided for a given content-type, a Finch app will still compile using this default instance (see #683).
API Changes
- The `body` and `bodyOption` endpoints now return `Buf` instead of `String`. Thus `body.as[User]` should still work as expected, but there is a new `stringBody` instance that might be used in place of `body` where a UTF-8 string of an HTTP body is expected.
- The `Encode` instance for `Either` is removed (this shouldn't be defined in Finch). See #689.
Bug Fixes
- Extra new line character on streamed responses (see #652, thanks @ilya-murzinov)
- Exceptions encoded in JSON are now properly char-escaped (see #680, thanks @akozhemiakin)