docs: add info about compression
JStickler committed Jan 10, 2025
1 parent efd3ec3 commit e1c20cd
Showing 1 changed file with 8 additions and 0 deletions.
8 changes: 8 additions & 0 deletions docs/sources/configure/bp-configure.md
@@ -50,6 +50,14 @@ But what if the application itself generated logs that were out of order? Well,

It's also worth noting that the batching nature of the Loki push API can lead to some instances of out-of-order errors that are really false positives. (Perhaps a batch partially succeeded and was present; when the batch is retried, anything that had previously succeeded would return an out-of-order error, while anything new would be accepted.)

## Use `snappy` compression algorithm

`Snappy` is currently Loki's compression algorithm of choice. It is much faster than `gzip`, though not as efficient in storage; Grafana Labs found this an acceptable tradeoff.
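
The chunk compression algorithm is set in the `ingester` block of the Loki configuration via the `chunk_encoding` parameter. A minimal sketch (other accepted values include `gzip` and `lz4` variants):

```yaml
# Minimal Loki configuration sketch: compress chunks with snappy,
# trading some storage efficiency for faster compression.
ingester:
  chunk_encoding: snappy
```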

Grafana Labs found that `gzip` compressed very well but was very slow, which caused slow query responses.

`LZ4` was a good compromise between speed and compression performance. However, it had issues with non-deterministic output of compressed chunks: two ingesters compressing the same data could produce chunks with different checksums, even though the chunks decompressed back to identical input data. This interfered with syncing chunks to reduce duplicates.

## Use `chunk_target_size`

Using `chunk_target_size` instructs Loki to try to fill all chunks to a target _compressed_ size of 1.5MB. These larger chunks are more efficient for Loki to process.
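
A minimal sketch of the corresponding configuration, assuming the target is expressed in bytes (1572864 bytes = 1.5 × 1024 × 1024):

```yaml
# Minimal Loki configuration sketch: cut chunks at a target compressed
# size of roughly 1.5 MB (1572864 bytes).
ingester:
  chunk_target_size: 1572864
```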