GH-49641: [C++] Fix Lz4HadoopCodec to split large blocks for Hadoop compatibility #49642
clee704 wants to merge 1 commit into apache:main
Conversation
Thanks for submitting this PR. Just for the record: I would not consider this a critical fix, as this is just working around a bug/limitation in another Parquet implementation.
pitrou left a comment
Thanks! LGTM overall, a suggestion below.
|   CheckCodecRoundtrip(c1, c2, data, /*check_reverse=*/false);
| }
|
| TEST(TestCodecLZ4Hadoop, MultiBlockRoundtrip) {
Looks like the 3 additional tests have essentially the same structure and could be folded into one for conciseness. Could you do that?
By the way @clee704, since this is about generating new files, why not use the newer LZ4_RAW which completely solves the Hadoop compatibility problem?
Rationale
`Lz4HadoopCodec::Compress` writes the entire input as a single Hadoop-framed LZ4 block. Hadoop's `Lz4Decompressor` allocates a fixed 256 KiB output buffer per block (`IO_COMPRESSION_CODEC_LZ4_BUFFERSIZE_DEFAULT`), so any block whose decompressed size exceeds 256 KiB causes an `LZ4Exception` on JVM readers (parquet-mr + Hadoop `BlockDecompressorStream`).

This was found when writing Parquet dictionary pages > 256 KiB with LZ4 compression (e.g. 40K unique INT64 values = 320 KiB). The file was written successfully, but the JVM reader could not decompress the dictionary page.
This is a read failure, not data corruption — the compressed bytes on disk are valid LZ4. Arrow's own C++ reader decompresses them fine. The JVM reader throws a hard exception; it does not silently return wrong data.
PARQUET-1878 added the Hadoop-compatible codec but only writes one block per page. ARROW-11301 fixed the reader for multi-block Hadoop data, but the writer was never updated.
What changes are included in this PR?
Split input into blocks of ≤ 256 KiB in `Lz4HadoopCodec::Compress` and update `MaxCompressedLen` for per-block prefix overhead. Arrow's reader (`TryDecompressHadoop`) already handles multiple blocks. No behavioral change for data ≤ 256 KiB (still produces a single block, identical output to before).

Are these changes tested?
Yes:

- `MultiBlockRoundtrip`: compress→decompress round-trip for sizes from 0 to 1 MiB
- `BlockSizeLimit`: parses the compressed output and asserts every block's decompressed_size is ≤ 256 KiB. Fails without the fix, passes with it.

Are there any user-facing changes?
Parquet files written with `LZ4_HADOOP` compression containing pages > 256 KiB will now be readable by JVM-based Parquet readers (parquet-mr + Hadoop). Previously these files caused an `LZ4Exception` on the JVM reader. No change for files with pages ≤ 256 KiB.

AI-generated code disclosure
This fix was developed with the assistance of an AI coding assistant (GitHub Copilot). The author has reviewed and verified all changes, including validating the fix with standalone tests that confirm the old code fails and the new code passes. The root cause analysis was done by tracing through Arrow's `Lz4HadoopCodec`, Hadoop's `BlockDecompressorStream`/`Lz4Decompressor` source code, and the Parquet page write path.