v1.4.0

This is release v1.4.0 of Loki.

Over 130 PRs were merged for this release, from 40 different contributors! We continue to be humbled and thankful for the growing community of contributors and users of Loki. Thank you all so much.

Important Notes

Really, this is important

Before we get into new features, version 1.4.0 brings with it the first (that we are aware of) upgrade dependency.

We have created a dedicated page for upgrading Loki in the operations section of the docs.

The Docker image tag naming has changed: starting in 1.4.0, Docker images no longer have the `v` prefix, e.g. `grafana/loki:1.4.0`.

Also, be aware that we are now pruning old `master-xxxxx` Docker images from Docker Hub; currently anything older than 90 days is removed. We will never remove released versions of Loki.

Notable Features

Please check out the CHANGELOG for the full list of changes; here are some of the most notable:

  • 1661 cyriltovena: Frontend & Querier query statistics instrumentation.

The API now returns a plethora of stats about the work Loki performed to execute your query. Eventually this will be displayed in some form in Grafana to help users better understand how "expensive" their queries are. Our initial goal was to better instrument the recent work done in v1.3.0 on query parallelization and to better understand the performance of each part of Loki. In the future we are looking at additional ways to give users feedback so they can tailor their queries for better performance.
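
If you want to poke at these stats yourself, here is a rough sketch of pulling them out of a range query response with curl and jq (this assumes Loki is listening on localhost:3100; the exact fields inside the stats object may differ between versions):

$ curl -sG "http://localhost:3100/loki/api/v1/query_range" \
    --data-urlencode 'query=sum(rate({app="foo"}[5m]))' | jq '.data.stats'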

  • 1652 cyriltovena: --dry-run Promtail.
  • 1649 cyriltovena: Pipe data to Promtail

This is a long-overdue addition to Promtail that helps with setting up and debugging pipelines. With these new features you can feed a single log line into Promtail like this:

echo -n 'level=debug msg="test log (200)"' | cmd/promtail/promtail -config.file=cmd/promtail/promtail-local-config.yaml --dry-run -log.level=debug 2>&1 | sed 's/^.*stage/stage/g'

`-log.level=debug`, `2>&1`, and `| sed 's/^.*stage/stage/g'` are added to enable debug output, redirect stderr to stdout, and filter some of the noise from the log lines.

The stdin functionality also works without --dry-run, allowing you to feed any logs into Promtail via stdin and send them to Loki.
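
For example, to replay an existing log file through your pipeline and ship it to Loki, a sketch using the same example config as above (the file path is just illustrative):

cat /var/log/myapp.log | cmd/promtail/promtail -config.file=cmd/promtail/promtail-local-config.yaml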

  • 1677 owen-d: Literal Expressions in LogQL
  • 1662 owen-d: Binary operators in LogQL

These two extensions to LogQL now let you execute queries like this:

* `sum(rate({app="foo"}[5m])) * 2` 
* `sum(rate({app="foo"}[5m]))/1e6` 
  • 1678 slim-bean: promtail: metrics pipeline count all log lines

Now you can get per-stream line counts as a metric from Promtail, which is useful for seeing which applications log the most.

- metrics:
    line_count_total:
      config:
        action: inc
        match_all: true
      description: A running counter of all lines with their corresponding labels
      type: Counter
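
Once a pipeline stage like this is running, the counter shows up on Promtail's own /metrics endpoint alongside its other Prometheus metrics. A quick way to eyeball it (a sketch assuming Promtail's default HTTP listen port of 9080; pipeline metrics are exposed with a `promtail_custom_` prefix):

$ curl -s http://localhost:9080/metrics | grep line_count_total
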
  • 1558 owen-d: ingester.max-chunk-age
  • 1572 owen-d: Feature/query ingesters within

These two configs let you set the maximum time a chunk can stay in memory in Loki. This is useful for keeping memory usage down, as well as limiting potential data loss if ingesters crash. Combine this with the `query_ingesters_within` config and your queriers can skip asking the ingesters for data you know is no longer in memory (older than `max_chunk_age`).

NOTE: Do not set `max_chunk_age` too small; the default of 1h is probably a good starting point for most people. Loki does not perform well when flushing many small chunks (such as when your logs have too much cardinality), so setting this lower than 1h risks flushing too many small chunks.
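
As a sketch, both settings can also be passed as command-line flags; the values below are illustrative, not recommendations:

$ loki -config.file=loki-local-config.yaml \
    -ingester.max-chunk-age=2h \
    -querier.query-ingesters-within=3h

Keeping `query_ingesters_within` a bit larger than `max_chunk_age` ensures queriers still ask ingesters for anything that could plausibly remain in memory.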

  • 1581 slim-bean: Add sleep to canary reconnect on error

This isn't a feature, but it's an important fix: this is the second time our canaries have tried to DDoS our Loki clusters, so you should update to prevent them from trying to attack you. Aggressive little things, these canaries...

  • 1840 slim-bean: promtail: Retry 429 rate limit errors from Loki, increase default retry limits
  • 1845 wardbekker: throw exceptions on HTTPTooManyRequests and HTTPServerError so FluentD will retry

These two PRs change how 429 HTTP response codes (rate limiting) are handled. Previously these responses were dropped; now they will be retried by these clients:

* Promtail
* Docker logging driver
* Fluent Bit
* Fluentd

This pushes the failure to send logs to two places. First is the retry limits. The defaults in promtail (and thus also the Docker logging driver and Fluent Bit, which share the same underlying code) will retry 429s (and 500s) on an exponential backoff for up to about 8.5 mins on the default configurations. (This can be changed; see the config docs for more info.)

The second place would be the log file itself. At some point, most log files roll based on size or time. Promtail makes an attempt to read a rolled log file but will only try once. If you are very sensitive to lost logs, give yourself really big log files with size-based rolling rules and increase those retry timeouts. This should protect you from Loki server outages or network issues.
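
If you are one of those sensitive-to-lost-logs users, the backoff behaviour can be tuned on the Promtail client. A hedged sketch using command-line flags (the equivalent `backoff_config` block in the client config does the same thing; see the config docs for exact names and defaults):

$ cmd/promtail/promtail -config.file=cmd/promtail/promtail-local-config.yaml \
    -client.max-retries=20 \
    -client.max-backoff=10m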

Installation:

The components of Loki are currently distributed in plain binary form and as Docker container images. Choose what fits your use-case best.

Docker container:

$ docker pull "grafana/loki:1.4.0"
$ docker pull "grafana/promtail:1.4.0"

Binary

We provide pre-compiled binary executables for the most common operating systems and architectures.
Choose from the assets below for the application and architecture matching your system.
Example for Loki on the linux operating system and amd64 architecture:

$ curl -O -L "https://github.com/grafana/loki/releases/download/v1.4.0/loki-linux-amd64.zip"
# extract the binary
$ unzip "loki-linux-amd64.zip"
# make sure it is executable
$ chmod a+x "loki-linux-amd64"
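
A quick sanity check after extracting (this assumes the `-version` flag, which prints build information):

$ ./loki-linux-amd64 -version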
