Finch 0.16.0-M2


This milestone release brings new performance improvements (reducing the runtime gap between Finagle and Finch to just 5%) as well as a new finch-iteratee module that enables generic request/response streaming.

Iteratee Streaming

Integration with the iteratee library has been one of the most requested features in Finch. Travis Brown (@travisbrown) started experimenting with it almost a year ago (see #557), and that work has now been finished (see #812) through the tireless effort of Sergey Kolbasov (@imliar).

Essentially, it's now possible to

  • receive chunked payloads in the form of io.iteratee.Enumerator[Future, A] (given an io.finch.iteratee.Enumerate[CT, A] instance)
  • send chunked payloads in the form of io.iteratee.Enumerator[Future, B] (given an io.finch.Encode.Aux[CT, B] instance)

For example, importing both

import io.finch.circe._, io.finch.iteratee._

enables newline-delimited JSON streaming in Finch.

import io.finch._
import io.finch.circe._
import io.finch.iteratee._
import io.circe.generic.auto._
import io.iteratee._
import com.twitter.util.Future

case class Foo(bar: Int)

// Streaming request payload.
val sum: Endpoint[Int] =
  post("foos" :: enumeratorJsonBody[Foo]) { (foos: Enumerator[Future, Foo]) =>
    val ints: Enumerator[Future, Int] = foos.through(map[Foo, Int](_.bar))
    val sum: Future[Int] = ints.into(fold[Int, Int](0)(_ + _))

    sum.map(Ok) // the future completes once the whole stream has been folded
  }

// Streaming response payload.
val foos: Endpoint[Enumerator[Future, Foo]] = get("foos") {
  Ok(enumList(List(Foo(1), Foo(2))))
}
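
To actually serve such endpoints, they can be combined into a single Finagle service. A minimal sketch (the port and the exact server options here are illustrative, not prescribed by the release notes); note that chunked transfer encoding requires streaming to be enabled on the Finagle server:

import com.twitter.finagle.Http
import com.twitter.util.Await
import io.finch.Application

// Combine both endpoints into one service serving JSON.
val service = (sum :+: foos).toServiceAs[Application.Json]

// Streaming must be enabled for chunked requests/responses to flow.
Await.ready(
  Http.server
    .withStreaming(enabled = true)
    .serve(":8081", service)
)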

See more details in the docs.

Performance Improvements

I learned these simple techniques for reducing allocations around closures from Flavio Brasil (@fwbrasil). Applying them to Finch yielded quite a significant performance gain (see #807), reducing the gap between Finch and Finagle to just 5% (it was 10-12% before).
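
The gist of this kind of optimization can be sketched on a toy endpoint type (illustrative only; Finch's actual change in #807 touches its internal Endpoint machinery): avoid wrapping the user's function in a fresh lambda on every call, so no extra closure is allocated per request.

import com.twitter.util.Future

// A toy endpoint: just a deferred computation (not Finch's real type).
trait Ep[A] { self =>
  def run(input: String): Future[A]

  // Allocation-heavy version: every run() constructs a new closure
  // (out => f(out)) that captures `f`, costing a heap allocation per call.
  def mapSlow[B](f: A => B): Ep[B] = new Ep[B] {
    def run(input: String): Future[B] = self.run(input).map(out => f(out))
  }

  // Cheaper version: `f` itself is handed to map, so the only function
  // value involved is allocated once, when the endpoint is built.
  def mapFast[B](f: A => B): Ep[B] = new Ep[B] {
    def run(input: String): Future[B] = self.run(input).map(f)
  }
}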

Here are some numbers for running time (lower is better):

NEW:

MapBenchmark.map                             avgt   10   633.837 ±  14.613   ns/op
MapBenchmark.mapAsync                        avgt   10   624.249 ±  22.426   ns/op
MapBenchmark.mapOutput                       avgt   10   734.677 ±  30.215   ns/op
MapBenchmark.mapOutputAsync                  avgt   10   737.170 ±  29.808   ns/op
ProductBenchmark.bothMatched                 avgt   10  1175.716 ±  44.236   ns/op
ProductBenchmark.leftMatched                 avgt   10    26.510 ±   2.335   ns/op
ProductBenchmark.rightMatched                avgt   10     5.081 ±   0.112   ns/op

OLD:

MapBenchmark.map                             avgt   10   624.174 ±  28.621   ns/op
MapBenchmark.mapAsync                        avgt   10   647.369 ±  30.775   ns/op
MapBenchmark.mapOutput                       avgt   10  1221.053 ±  39.319   ns/op
MapBenchmark.mapOutputAsync                  avgt   10  1202.541 ±  44.432   ns/op
ProductBenchmark.bothMatched                 avgt   10  1224.278 ±  49.114   ns/op
ProductBenchmark.leftMatched                 avgt   10    29.856 ±   0.709   ns/op
ProductBenchmark.rightMatched                avgt   10     5.209 ±   0.112   ns/op

In a nutshell:

  • map* operations are now up to 40% faster (400 fewer bytes allocated per call)
  • :: is now up to 5% faster (120 fewer bytes allocated per call)

Dependencies
