# StreamLZ 1.4.4
High-performance LZ compression library for .NET with streaming support.
## Features
- Up to 11.8 GB/s decompress (level 8, enwik9), down to 22.2% ratio (level 11, enwik9)
- Simple level scale (1-11) — higher = better ratio, slower
- Streaming — SLZ1 frame format supports files of any size
- Sliding window — cross-block match references for better ratio
- Parallel compress and decompress — automatic multi-threading at L6+ (see Threading Model)
- Async — `CompressFileAsync`, `DecompressFileAsync`, `IAsyncDisposable` on `SlzStream`
- Validation — `TryDecompress` (non-throwing), `IsValidFrame`, content checksums
- Zero allocations on the hot path (pooled scratch buffers)
- Native AOT and trimming compatible
- Targets net8.0 and net10.0
## Installation

```shell
dotnet add package StreamLZ
```
## Quick Start

```csharp
using StreamLZ;

// Simplest: compress and decompress byte arrays (SLZ1 framed, self-describing)
byte[] compressed = Slz.CompressFramed(data);
byte[] restored = Slz.DecompressFramed(compressed); // no size tracking needed

// Compress / decompress files
Slz.CompressFile("input.txt", "output.slz");
Slz.DecompressFile("output.slz", "restored.txt");

// Stream-based (any size)
Slz.CompressStream(input, output, level: 6);

// Named compression levels
byte[] fast = Slz.CompressFramed(data, SlzCompressionLevel.Fast);
byte[] max = Slz.CompressFramed(data, SlzCompressionLevel.Maximum);
```
## Compression Levels

| Level | Codec | Matcher | Parallel Compress | Parallel Decompress | Notes |
|---|---|---|---|---|---|
| 1 | Fast | Hash | | | Fastest compress |
| 2 | Fast | Hash | | | |
| 3 | Fast | Hash | | | |
| 4 | Fast | Hash | | | |
| 5 | Fast | Hash | | | Best Fast ratio |
| 6 | High SC | Hash | ✅ | ✅ | Recommended default |
| 7 | High SC | Hash | ✅ | ✅ | |
| 8 | High SC | BT4 | ✅ | ✅ | Best SC ratio |
| 9 | High | Hash | | partial | Sliding window |
| 10 | High | Hash | | partial | |
| 11 | High | BT4 | | partial | Maximum ratio |
See Threading Model below for details on how parallelism works at each level.
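As a rough guide, the level scale trades compression speed for ratio. A minimal sketch of choosing a level per scenario (the `PickLevel` helper and its scenario flags are illustrative, not part of the library; `input`/`output` streams are assumed in scope as in Quick Start — only the numeric `level` parameter of `Slz.CompressStream` is from this README):

```csharp
using StreamLZ;

// Illustrative helper mapping scenarios to levels from the table above.
// The scenario flags are ours; only the 1-11 level scale is StreamLZ's.
static int PickLevel(bool latencySensitive, bool archival) => (latencySensitive, archival) switch
{
    (true, _) => 1,  // Fast codec: fastest compress
    (_, true) => 11, // Maximum ratio, slowest compress
    _         => 6,  // Recommended default: parallel compress and decompress
};

Slz.CompressStream(input, output, level: PickLevel(latencySensitive: false, archival: true));
```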
## API

StreamLZ offers three API tiers. Choose based on your use case:

### Framed in-memory (simplest — self-describing round-trip)

Uses the SLZ1 frame format. Output includes size metadata, so decompression needs no external information. Best for storing or transmitting compressed blobs.
```csharp
byte[] compressed = Slz.CompressFramed(data);
byte[] restored = Slz.DecompressFramed(compressed);

// Named levels for readability
byte[] fast = Slz.CompressFramed(data, SlzCompressionLevel.Fast);
```
### Raw in-memory (zero-copy — caller manages buffers)

No framing. The caller must track the original size and provide output buffers (including `Slz.SafeSpace` extra bytes for decompression). Best for hot paths where you control the buffer lifecycle.
```csharp
int bound = Slz.GetCompressBound(data.Length);
byte[] dst = new byte[bound];
int compSize = Slz.Compress(data, dst, level: 3);

byte[] output = new byte[originalSize + Slz.SafeSpace];
Slz.Decompress(compressed, output, originalSize);

// Non-throwing variant for untrusted data
if (Slz.TryDecompress(compressed, output, originalSize, out int written))
{
    // success: 'written' bytes were decoded into 'output'
}
```
Important: Raw and framed formats are not interchangeable. Data compressed
with Compress must be decompressed with Decompress (not DecompressFramed),
and vice versa.
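The raw tier pairs naturally with pooled buffers. A sketch of that pattern, assuming the package is installed (the `ArrayPool` usage and the `Store` placeholder are ours; `GetCompressBound`, `Compress`, and `SafeSpace` are the calls documented above):

```csharp
using System.Buffers;
using StreamLZ;

// Rent scratch buffers from the shared pool so the raw tier stays
// allocation-free per call.
byte[] dst = ArrayPool<byte>.Shared.Rent(Slz.GetCompressBound(data.Length));
try
{
    int compSize = Slz.Compress(data, dst, level: 3);
    // Persist the exact compressed span plus the original size, which the
    // raw Decompress call will need later. Store(...) is a placeholder.
    Store(dst.AsSpan(0, compSize), data.Length);
}
finally
{
    ArrayPool<byte>.Shared.Return(dst);
}
```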
### File and stream (any size, SLZ1 framed)
Uses the SLZ1 frame format with a sliding window for cross-block match references. Supports files of any size with bounded memory usage.
```csharp
// Sync
Slz.CompressFile("input.txt", "output.slz");
Slz.DecompressFile("output.slz", "restored.txt");
Slz.CompressStream(input, output, level: 6);
Slz.DecompressStream(input, output);

// Async
await Slz.CompressFileAsync("input.txt", "output.slz", cancellationToken: ct);
await Slz.DecompressFileAsync("output.slz", "restored.txt", cancellationToken: ct);

// With content checksum for integrity verification
Slz.CompressFile("input.txt", "output.slz", useContentChecksum: true);

// Limit compression threads (for server workloads)
Slz.CompressFile("input.txt", "output.slz", maxThreads: 4);
```
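The async overloads accept a cancellation token, which also makes it easy to bound long-running jobs. A sketch, assuming the package is installed (the timeout policy and partial-file cleanup are ours; only `Slz.CompressFileAsync` and its `cancellationToken` parameter are from the API above):

```csharp
using StreamLZ;

// Bound a long compression with a 5-minute timeout.
using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5));
try
{
    await Slz.CompressFileAsync("input.txt", "output.slz", cancellationToken: cts.Token);
}
catch (OperationCanceledException)
{
    File.Delete("output.slz"); // discard any partial output on timeout
}
```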
### SlzStream (GZipStream-style wrapper)

```csharp
// Compress (supports await using for async disposal)
await using var compressStream = new SlzStream(outputStream, CompressionMode.Compress);
inputStream.CopyTo(compressStream);

// Decompress
await using var decompressStream = new SlzStream(inputStream, CompressionMode.Decompress);
decompressStream.CopyTo(outputStream);

// With options
var options = new SlzStreamOptions
{
    Level = 9,
    UseContentChecksum = true,
    LeaveOpen = true
};
await using var stream = new SlzStream(inner, CompressionMode.Compress, options);
```
Note: Disposing an SlzStream in compress mode without writing any data produces
no output. To get a valid empty SLZ1 stream, write at least one byte, or use
CompressFramed(ReadOnlySpan<byte>.Empty).
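The empty-input caveat above can be handled with a small guard. A sketch, assuming the package is installed (the `CompressPayload` helper is ours; `SlzStream` and `CompressFramed` are documented above):

```csharp
using StreamLZ;

// An SlzStream that never received a write produces no output, so fall back
// to CompressFramed for an empty payload to get a valid empty SLZ1 frame.
byte[] CompressPayload(byte[] payload)
{
    if (payload.Length == 0)
        return Slz.CompressFramed(ReadOnlySpan<byte>.Empty);

    using var ms = new MemoryStream();
    using (var slz = new SlzStream(ms, CompressionMode.Compress))
        slz.Write(payload);
    return ms.ToArray();
}
```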
## Validation

```csharp
bool validBytes = Slz.IsValidFrame(compressedData);
bool validStream = Slz.IsValidFrame(stream); // rewinds if seekable
```
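For untrusted input, frame validation combines naturally with the framed decompress path. A sketch (the `TryRestore` helper and its null-return policy are ours; `IsValidFrame` and `DecompressFramed` are from this README):

```csharp
using StreamLZ;

// Reject malformed or truncated frames before attempting to decode.
byte[]? TryRestore(byte[] untrusted)
{
    if (!Slz.IsValidFrame(untrusted))
        return null;

    return Slz.DecompressFramed(untrusted);
}
```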
## JIT warmup (optional)

```csharp
// Called automatically on first use of Slz. Call explicitly at app
// startup to move the ~15 ms JIT cost to a predictable point.
Slz.WarmUp();
```
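A sketch of paying that cost at startup rather than on the first request (the `Stopwatch` measurement is ours; only `Slz.WarmUp` is from the API above):

```csharp
using System.Diagnostics;
using StreamLZ;

var sw = Stopwatch.StartNew();
Slz.WarmUp(); // one-time JIT cost, ~15 ms per the note above
Console.WriteLine($"StreamLZ warm-up took {sw.ElapsedMilliseconds} ms");
```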
## Comparison vs LZ4, Snappy, Zstd

### enwik9 (1 GB text, 3-run median)

| Compressor | Ratio | Compress | Decompress | Parallel Compress | Parallel Decompress |
|---|---|---|---|---|---|
| Snappy | 50.9% | 556 MB/s | 1,511 MB/s | | |
| LZ4 Fast | 50.9% | 532 MB/s | 4,258 MB/s | | |
| SLZ L1 | 52.3% | 336 MB/s | 5,667 MB/s | | |
| Zstd 1 | 35.7% | 468 MB/s | 1,160 MB/s | | |
| LZ4 Max | 37.2% | 25 MB/s | 4,477 MB/s | | |
| SLZ L5 | 38.2% | 65 MB/s | 4,954 MB/s | | |
| Zstd 3 | 31.2% | 318 MB/s | 1,223 MB/s | | |
| SLZ L6 | 27.8% | 57 MB/s | 10,678 MB/s | ✅ | ✅ |
| Zstd 9 | 27.2% | 71 MB/s | 1,415 MB/s | | |
| SLZ L8 | 27.3% | 26 MB/s | 11,788 MB/s | ✅ | ✅ |
| Zstd 19 | 23.5% | 2.2 MB/s | 1,334 MB/s | | |
| SLZ L11 | 22.2% | 1.5 MB/s | 1,054 MB/s | | partial |
### silesia (212 MB mixed, 3-run median)

| Compressor | Ratio | Compress | Decompress | Parallel Compress | Parallel Decompress |
|---|---|---|---|---|---|
| Snappy | 48.1% | 752 MB/s | 1,429 MB/s | | |
| LZ4 Fast | 47.4% | 695 MB/s | 4,510 MB/s | | |
| SLZ L1 | 47.1% | 440 MB/s | 5,790 MB/s | | |
| Zstd 1 | 34.5% | 567 MB/s | 1,390 MB/s | | |
| LZ4 Max | 36.3% | 17 MB/s | 4,832 MB/s | | |
| SLZ L5 | 36.4% | 77 MB/s | 5,196 MB/s | | |
| SLZ L6 | 26.7% | 62 MB/s | 9,432 MB/s | ✅ | ✅ |
| Zstd 9 | 27.9% | 81 MB/s | 1,515 MB/s | | |
| Zstd 19 | 24.9% | 3.2 MB/s | 1,052 MB/s | | |
| SLZ L11 | 24.2% | 3.0 MB/s | 1,439 MB/s | | partial |
All benchmarks on Intel Arrow Lake-S (Ultra 9 285K), 24-core, .NET 10.
## Threading Model

StreamLZ uses a different threading strategy depending on the compression level:

- **L1-L5 (Fast codec):** Single-threaded compress and decompress. The high decompress throughput (5+ GB/s) comes from the simple token format, not parallelism.
- **L6-L8 (High codec, self-contained):** Fully parallel. Chunks are grouped (4 × 256 KB = 1 MB per group) and each group is assigned to one thread. Within a group, chunks are compressed/decompressed sequentially with cross-chunk context, giving the match finder a larger search window. Between groups there are no references, preserving full parallelism across all available cores.
- **L9-L11 (High codec, sliding window):** Compression is single-threaded because chunks reference previous output via a sliding window. Decompression uses a batched two-phase approach that processes chunks in batches of `ProcessorCount` (e.g. 24 on a 24-core machine). For each batch:
  - **Phase 1 (parallel):** `ReadLzTable` runs on all chunks in the batch simultaneously. This decodes the entropy streams (Huffman/tANS) and unpacks offsets, which is the most CPU-intensive part.
  - **Phase 2 (serial):** `ProcessLzRuns` resolves tokens and copies literals/matches for each chunk in order, since match copies can reference output from earlier chunks.

  Then the next batch starts. This yields ~47% faster decompression than fully serial on a 24-core machine.

Compression thread count can be limited with the `maxThreads` parameter (e.g. for server workloads). Decompression threading is automatic and cannot be disabled.
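A sketch of capping compression threads on a shared server so StreamLZ leaves cores free for request handling (the sizing heuristic is ours; the `maxThreads` parameter is the one documented above):

```csharp
using StreamLZ;

// Reserve most cores for request handling; give compression at most a quarter.
int cap = Math.Max(1, Environment.ProcessorCount / 4);
Slz.CompressFile("input.txt", "output.slz", maxThreads: cap);
```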
## License

MIT
## Dependencies

- net8.0: System.IO.Hashing (>= 9.0.4)
- net10.0: System.IO.Hashing (>= 9.0.4)