[FEA] Add GZIP compression support to parquet writer #14509
Labels: 0 - Backlog (in queue waiting for assignment), cuIO, feature request, libcudf (affects libcudf C++/CUDA code), Spark (functionality that helps Spark RAPIDS)
Is your feature request related to a problem? Please describe.
The Parquet format in Apache Spark supports many compression codecs (link): none, uncompressed, snappy, gzip, lzo, brotli, lz4, zstd.
cuDF has both internal implementations and an nvCOMP integration to provide compression and decompression codecs. For the Parquet format, GZIP compression is a DEFLATE stream plus a header and trailer. nvCOMP does not support the DEFLATE variant with this framing, so the reader still uses the internal gzip decompression implementation, and we do not have an internal gzip compression implementation at all. To support GZIP in the PQ writer we would need to use the nvCOMP DEFLATE codec and write the header (and trailer) on our own.
Describe the solution you'd like
Add support for GZIP compression to the cuDF Parquet writer by adding a header-writing implementation and using nvCOMP deflate.
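As a sketch of the framing the writer would need to add: GZIP (RFC 1952) wraps a raw DEFLATE stream in a 10-byte header and an 8-byte trailer holding the CRC32 and the uncompressed size modulo 2^32. The snippet below illustrates that byte layout on the CPU, with Python's zlib producing the raw DEFLATE stream as a stand-in for the GPU codec; it is an illustration of the format, not the proposed cuDF implementation.

```python
import gzip
import struct
import zlib

def gzip_wrap(raw_deflate: bytes, crc: int, isize: int) -> bytes:
    """Wrap a raw DEFLATE stream in GZIP framing (RFC 1952)."""
    # 10-byte header: magic 0x1f 0x8b, CM=8 (deflate), no flags,
    # MTIME=0, XFL=0, OS=255 (unknown)
    header = b"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff"
    # 8-byte trailer: CRC32 of the uncompressed data, then its
    # size modulo 2^32, both little-endian
    trailer = struct.pack("<II", crc, isize & 0xFFFFFFFF)
    return header + raw_deflate + trailer

data = b"parquet page payload " * 100

# Raw DEFLATE with no zlib wrapper (negative wbits), which is the
# stream a DEFLATE codec would emit
compressor = zlib.compressobj(level=9, wbits=-15)
raw = compressor.compress(data) + compressor.flush()

member = gzip_wrap(raw, zlib.crc32(data), len(data))

# Any standard gzip decoder can now read the wrapped stream back
assert gzip.decompress(member) == data
```

Since the header is a fixed 10 bytes and the trailer needs only the CRC32 and length of the uncompressed page, the framing cost per compressed page is small and independent of the codec doing the DEFLATE work.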
Describe alternatives you've considered
n/a
Additional context
Also see Spark-RAPIDS request here: NVIDIA/spark-rapids#9718